Technology expert Evan Schuman takes an authoritative look at the faults and foibles of enterprise IT.
When Macy’s on Wednesday reported more details about the “hiding” of $151 million, it became clear its accounting controls simply didn’t work. The incident exposed a massive software hole found in just about every enterprise environment.
The TikTok owner fired — and then sued — an intern for ‘deliberately sabotaging’ its LLM. This sounds more like a management failure, and a lesson for IT that LLM guardrails are a joke.
Enterprise CIOs today allow any user to choose pretty much any freeware browser they want — and then use it to access their most sensitive systems. Does anyone see a problem here?
In a perfect universe, the persuasiveness of an argument would not be based mostly on who said it. In the world we live in, though, it is. And it’s hard to find a less credible entity to create a genAI accuracy test than OpenAI.
A group of Harvard students experimented with AI-linked eyeglasses, offering a powerful peek into the AI nightmares coming for IT in 2025.
The adoption of generative AI is moving too quickly, and its dangers remain too unknown, for any meaningful rules to be put in place on AI vendors. Regulating enterprises makes far more sense; influence enterprise behavior and the vendors will follow.
Just about every generative AI vendor offers enterprise CIOs all kinds of promises about the technology. But they’re talking up 2026 capabilities when trying to make 2024 sales. That’s a recipe for disaster for both buyer and seller.
It’s all a matter of understanding how your business can benefit from generative AI tools and platforms. But first, you need to make some difficult decisions — and then hope genAI doesn’t self-destruct.
CIOs are so desperate to stop generative AI hallucinations they’ll believe anything. Unfortunately, Agentic RAG isn’t new and its abilities are exaggerated.
In many ways, the rush to try out still-evolving generative AI tools really does feel like the Wild West. Business execs need to slow things down.
Generative AI advocates say genAI tools can catch errors made by other genAI tools — but humans must still check the AI checkers’ work.
If you can't trust the product, can you trust the vendor behind it?
Corporate privacy policies are supposed to reassure customers that their data is safe. So why are companies listing every possible way they can use that data?
It’s bad enough when an employee goes rogue and does an end-run around IT; when a vendor does something similar, the problems can be far worse.
Given the plethora of privacy rules already in place in Europe, how are companies with shiny, new, not-understood genAI tools supposed to comply? (Hint: they can’t.)
Why are so many companies sending out emails to customers that look like phishing attempts? Don’t they pay attention to their own security efforts?
When McDonald’s suffered a global outage in March that prevented it from accepting payments, it issued a lengthy statement about the incident that was vague and misleading, yet still revealed enough for many of the technical details to be pieced together.
Wall Street’s obsession with quarterly earnings has made it extraordinarily difficult for most enterprises to spend on long-term investments, or even mid-term investments.
Ever use one of those mobile food delivery apps, only to realize your delivery person isn’t who you expected? There’s a lesson here about identity, authentication, and what happens when the best-laid tech plans meet human beings.
The IT community is freaking out about AI data poisoning. For some, it’s a sneaky backdoor into enterprise systems: it surreptitiously infects the data that LLMs train on, which then gets sucked into enterprise systems.