
Promises, Lies and an Insight into the Artificial Intelligence Market


This text was originally published in Brazilian Portuguese


I decided to summarize my impressions from the last few months I have invested in understanding the Artificial Intelligence market. This is the first article in a series on the subject.


TL;DR, for those who don't want to venture into the full text:


The promise of AI: transforming the world for the better, driving technological advances in science, manufacturing and robotics, and giving people more time for the work that really matters: creativity, strategy and building the future.


The reality of AI products: most companies are dressing their products up as AI without necessarily having any real expertise in artificial intelligence, simply to meet market demand. As with web3 and blockchain in 2021, everything is now AI: we are no longer “Digital” companies, we are AI companies. The actual results of launched products have fallen far short of expectations, disappointment has been constant, and illegal practices have been used to train neural networks. There is a growing distrust of the companies emerging in this space and of what the big players are launching: it is getting ugly.


The market’s real intention: clearly, we are gradually automating simple tasks and building productivity awareness so that staff can be cut. Layoffs continue, yet executives are still reluctant to admit that AI exists to shrink the workforce. That is the main goal, but it cannot be said openly, for fear of retaliation.


Few companies actually set the direction of AI and make money from it: the shovel sellers.



The Promise of Artificial Intelligence


I believe we have all heard about the possibilities that AI brings to the world, and how our society will be transformed by the use of technology. There is a lot of speculation about how far the power of artificial intelligence can go, and what kind of implications it will have in areas such as medicine, robotics, finance and politics.


It's worth venturing into these conversations and learning what futurists like Amy Webb are saying about the positive side of the future. I've already written about one of her main books, which describes the market and research in artificial intelligence. I want to present two views along these lines, through videos.


Cathie Wood presents Ark Invest's vision for the next 5 to 10 years, and how the convergence of AI, Robotics, Energy Storage, Blockchain and Multi-Omic Sequencing can accelerate global growth through joint, disruptive innovation. Their Big Ideas 2024 report is a great piece of material outlining what to expect for the year. Important: Ark's predictions rarely come true, and the firm's investment thesis based on disruptive innovation is harshly criticized in the market.





The vision of artificial intelligence is sometimes confused with science fiction, since for centuries we have dreamed of autonomous machines capable of feeling, thinking and supporting humans in their activities. It is worth learning a little about how these visions are being applied to real-world problems. I spent a lot of time researching the topic, and it is an interesting exercise for broadening your perspective. But remember: everything you hear about the future, especially long-term product roadmaps, is speculation, and often does not come to fruition.






The reality of the market


Are we in a bubble? The more I learn about the market, the more convinced I am that, if we are not yet in a bubble, we are certainly on the curve of inflated expectations and overhype. I am not alone in this, and the latest launches in the AI world show me that companies continue to deceive the market with mediocre products, unrealistic expectations and a hype machine built for short-term stock appreciation.


While leading Lambda3, I had numerous opportunities to advise and support companies and teams delivering AI solutions, drawing on our cases as examples of what can be done pragmatically and directly with Machine Learning and Computer Vision. I intend to cover what has actually worked with AI, and how to take the technology seriously, in a dedicated article. For now, I am interested in a more general overview of the market.


In short, we are on an upward curve of AI Washing and inflated expectations. Products are propelled by the unbridled marketing of a market that grew accustomed to years of low interest rates, fueled by companies with FOMO in this modern gold rush. Every self-respecting company has some AI initiative being promoted, even if the initiative is just learning how to use an API to generate texts for a website, or putting a bot online (the sketch below shows how thin such an initiative can be). AI Washing is when companies manipulate product information to position their features within the current market wave. I have seen an exaggerated fascination with tools that generate impressive results at first but do not hold up to closer analysis, putting at risk the operation and the teams that maintain these products. The case of Air Canada is exemplary.
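To make the point concrete, here is a minimal sketch, in Python, of what many of these "AI initiatives" amount to in practice. It assumes the OpenAI Python client (v1 style) and an API key in the environment; the function, product and prompt are hypothetical examples of mine, not from any real company:

```python
# A minimal sketch of a typical "AI initiative": a thin wrapper around a
# third-party text-generation API. Assumes the OpenAI Python client (v1 style)
# and an OPENAI_API_KEY environment variable; names and prompts are hypothetical.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def generate_landing_page_copy(product_name: str, audience: str) -> str:
    """Generate marketing copy for a website section."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": "You write short marketing copy."},
            {"role": "user", "content": f"Write two sentences selling {product_name} to {audience}."},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(generate_landing_page_copy("our analytics dashboard", "retail managers"))
```

There is nothing wrong with calling an API; rebranding an entire company around a wrapper this thin is precisely what AI Washing describes.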


Among the recent big moments that have moved the market, I want to highlight some examples you probably already know: Amazon Just Walk Out, the Google Gemini demo, the developer startup Devin, and the Humane AI Pin. The marketing around these products was immense, full of big statements and followed by a noticeable jump in these companies' valuations, except for the detail that none of them were telling the truth about their products. Everything was, in some way, a scam: none of these tools is, or has remained, completely autonomous. Most of the demos were rehearsed and doctored to overstate the tools' capabilities, with clear speculative intent. In the AI gold rush, selling a dream no one can deliver remains a winning strategy for securing Venture Capital investment.




The case of Amazon's Just Walk Out division is an important example, mainly for adjusting our expectations about these technologies. Even after many years, and dozens of stores in operation, the company still needs more than a thousand employees, with human operators labeling and reviewing around 70% of transactions. It is far from an efficient or autonomous process. The dream of a technology that simply works, without human support, is far from reality.


What concerns me is that this care and attention to detail, making sure the entire service works, is only applied when the financial risk falls directly on the company. If an AI failure did not cause losses for Amazon by letting products go unbilled, the platform would certainly be left to run without much scrutiny. The financial results of these solutions are only achieved with human intervention; without it, the technology does not stand on its own. Inflated expectations can blind executives who hope to use the same platforms without the budget to maintain an army of annotators evaluating the AI's results.


Data annotation work is the mainstay of today's AI infrastructure. Nobody knows for sure, but it is estimated that millions of professionals are involved in classification and annotation tasks for AI algorithms. Companies like CloudFactory and Scale AI employ people around the world, at very low wages, to keep algorithms running. Welcome to the AI sweatshops. Helping algorithms identify things correctly is hard, tedious work of evaluating and correcting a model's mistakes.


Often dismissed as low-level work with an expiration date, annotation is regarded as monotonous and of minor importance, and is treated with secrecy by every company developing AI solutions today. Because artificial intelligence systems are, in reality, fragile, this work has no end date.


We even have a term for this fragility, Brittle AI. In 2018, an Uber self-driving car killed a woman because the computer vision algorithm, although trained to recognize a person riding a bike, had never been trained on someone pushing a bike across the street. Preparing for these edge cases is what makes annotators a fundamental part of the entire AI market. They are not going away any time soon.


Every current system that uses AI relies on the work of annotators: spam classification, screening sexual content in ads, linking credit card transactions, chatbot service messages, food classification so that the smart refrigerator recognizes packaging, and so on. Every serious system in production needs this kind of work to increase the accuracy of its algorithms, and the annotators often receive close to $1 per hour.


The Verge has a long article about the challenges of this market, one of the most essential in AI yet one almost nobody talks about, both because of the stigma and because of the competitive advantage it confers: a greater capacity to create annotations for training models translates directly into more accurate results. I highly recommend reading it.


Every AI operation needs a labeling and correction team at its side; the sketch below shows the shape of that loop. No system will be completely autonomous, not forever, if the company expects accuracy from the model. What happens to the autonomous agents being embedded in products whose owners will not have the budget to validate what the AI generates or decides? Who pays the price for those errors?
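To illustrate, here is a minimal human-in-the-loop sketch in Python. It is not any vendor's actual pipeline; the confidence threshold, class names and queue are assumptions of mine. The structural point is that low-confidence predictions get routed to human annotators, and their corrections flow back into the training data, which is why the work never ends:

```python
# Minimal human-in-the-loop sketch: route low-confidence predictions to human
# annotators and feed corrections back into the training set. All names and the
# threshold are illustrative assumptions, not any specific vendor's pipeline.
from dataclasses import dataclass, field

CONFIDENCE_THRESHOLD = 0.90  # below this, a human must review

@dataclass
class Prediction:
    item_id: str
    label: str
    confidence: float

@dataclass
class AnnotationLoop:
    review_queue: list[Prediction] = field(default_factory=list)
    training_data: list[tuple[str, str]] = field(default_factory=list)

    def handle(self, pred: Prediction) -> str:
        """Accept confident predictions; queue the rest for human review."""
        if pred.confidence >= CONFIDENCE_THRESHOLD:
            return pred.label
        self.review_queue.append(pred)
        return "pending_human_review"

    def human_correction(self, pred: Prediction, true_label: str) -> None:
        """An annotator's correction becomes new training data."""
        self.training_data.append((pred.item_id, true_label))

loop = AnnotationLoop()
print(loop.handle(Prediction("msg-001", "not_spam", 0.97)))  # auto-accepted
print(loop.handle(Prediction("msg-002", "spam", 0.55)))      # goes to a human
loop.human_correction(loop.review_queue[0], "not_spam")
print(len(loop.training_data))  # corrections accumulate, indefinitely
```

Every serious system mentioned above, from spam filters to fraud detection, runs some version of this loop.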


Google employs over 10,000 people as Search Quality Raters, evaluating and weighting its search ranking algorithm. These people spend their days assessing websites and adjusting the rankings the algorithm produces, following detailed guidelines written for the job. And what does the company do? It describes its algorithm as requiring no human intervention. Link to the Guide





Furthermore, the market already knows that the company faked the results when presenting its multimodal AI, Gemini, in a truly impressive video in which a person talks naturally to the program and even creates a game with it using post-its. The video, the results and the interaction with Gemini were all rehearsed and edited. The technology is nowhere near what was presented.


 

Devin is a lie


After a spectacular launch, it became clear that the product marketed as a new developer on your software team was lying. The startup behind Devin, a product aimed at the dev market, greatly exaggerated every published result.


What helps valuation the most in the short term:

  • Being pragmatic about the tool, presenting the platform's benefits and risks?

  • Saying you no longer need software developers, and betting on going viral?

The company opted for the second option, overshadowing the impressive work of its development team, and learned the hard way.


Obviously, Devin is not going to directly replace a programmer on the team. Any company that describes its tool that way is clearly lying. Even if the ultimate intention is to replace jobs, it won't happen that way.





Humane AI Pin


The Humane AI Pin fiasco

One market trend is AI-powered wearables, described in great detail in the Future Today Institute Report. Perhaps the most famous recent product is the Humane Ai Pin. Intended to become an indispensable companion and make your cell phone obsolete, the Pin uses a camera and a projector as its interface. In recent months the specialized media had genuinely fallen in love with the product, setting expectations high. The problem: the product is a disaster, and the latest reviews have been terrible. It will not be the dream of having a Jarvis at your side at all times.




 

Generating a weird future



LLMs and Generative AI in general are the most discussed platforms in the AI business world, but that does not mean the products are effective. The problem is that the apparent intelligence and acuity of these solutions end up producing bad results, with real impacts for anyone who uses them in serious applications. This is yet another reason annotators will not run out of work.


Their indiscriminate use is already becoming the norm, despite the many examples of problems these systems can create. I described some in my last post, about the TDC Summit AI 2024; it's worth checking out.


As another illustrative example, it is worth mentioning the Australian case in which a group of academics used Bard to write a submission to Parliament calling for greater ethics and professionalism in the management of consulting firms, citing examples of reprehensible and corrupt acts. The detail: the algorithm invented accusations that never happened, as LLMs commonly do, implicating KPMG and Deloitte at a national level. It was a huge mess 👌.


"I asked GPT about pizza, and he described its taste so perfectly that I am now fully convinced that he has actually eaten pizza, and is conscious of his sensory experience of eating it!" Sounds absurd? Isn't it...

One challenge for us technologists in the use of artificial intelligence is maintaining a skeptical, pragmatic stance while the entire IT and entertainment market insists AGI is just around the corner. Unfortunately, we are giving way to beliefs and delusions very much like the irony above, coined by Professor Subbarao Kambhampati.


Picturesque as the example is, I have spoken to people who are already planning to swap technology teams for one of the copilot tools, in the belief that it will handle software development 🤷🏾‍♂️


The frontier between intelligence and the imitation of intelligence is increasingly narrow and hard to identify. Although LLMs are trained to simulate reasoning, with impressive results, these models show no capacity for reasoning or self-criticism, relying only on memorization and pattern recognition. The professor makes this argument quite forcefully in his lecture “Can LLMs Reason?”




Tech.co keeps an up-to-date list of mistakes and failures by AI companies and others in the field. They cannot possibly record everything, so expect to find only the biggest blunders there; there is much more going on.


I can't imagine what could happen in the case of São Paulo, where the government hopes to use this kind of tool to support and accelerate school lesson planning.





As magical as it may seem, Generative AI is a very useful technology that is already being put to good use. The problem lies in how it was built: on intellectual property used without authorization. The most famous tools are increasingly the target of lawsuits over unauthorized use of images, and new cases keep appearing. Here we have an ethical and legal dilemma: how many intellectual property violations are we willing to accept in order to generate images of a panda riding a bicycle through a forest?




This new set of accusations could significantly change the direction of the GenAI image industry. And this is a moment of intense discussion about regulating the use of these platforms.





What concerns me most at the moment is automatic video generation. With deepfake platforms now launching commercially, we are about to face another Generative AI challenge, one that needs to be discussed in the broader context of society.



Although these results are a great credit to the research teams, I wonder: to what end? I can think of a few uses:

  • Memorial for family members to talk to deceased people

  • Digital twins to facilitate the process of generating videos, trainings, and advertisements without having to record them in person

  • Memes and jokes on the internet (I love it)

When I think about the implications and risks that we already have with deepfakes, I am interested in knowing how we will deal with them:

  • political smear campaigns

  • target marketing with the intention of destabilizing political groups

  • fraud to open accounts and defeat facial recognition (there are already projects bypassing almost all current security tools)

  • petty WhatsApp scams, money extortion, and scams targeting family members

  • endorsement by public figures of scams and malicious websites (if Pedro Bial is saying it then it must be trustworthy)

Analyze such a video from the perspective of a relative who is not tech-savvy: do you really believe it will be easy to identify it as fake?


The hole seems bottomless…


And we can already see the result everywhere. As I was writing this article, every day a new tool or new problem came up for discussion.


Netflix is using GenAI tools to manipulate images in a true-crime documentary about Jennifer Pan, who is accused of ordering the murder of her parents. The manipulated images suggest that Jennifer was a cheerful, confident, happy woman, and they appear to have been created with exactly that intent. Not so “true” after all…


Among the various ethical issues with manipulating images in a documentary that purports to be about real crimes, I can mention:

- It rewrites history. A documentary may seem an innocent place for this kind of manipulation, but it is not; altered images should carry at least some kind of watermark

- Express consent for altering a person's image

- Transparency: a notice in the credits




 

The real intention behind using AI




All the discourse from top executives about AI focuses on expanding human capacity and on the potential AI has to multiply each individual's results. I have described the technology this way myself. In reality, however, what we see is a different intention.


“Jobs are being lost to AI, but empowerment is the story to tell. Instead of editor Mary losing her job, her company will train her on an AI tool that generates first drafts, takes approved deliverables and converts them for use in catalog, web and social, and speeds up the process for other tasks. Mary’s manager expects her to produce three times as much content in the same time.” - Corporate Ozempic, Prof. Scott Galloway


Instead of teams of 5 people producing the work of 5, with the incorporation of AI practices and tools and the maturing of the ecosystem around them, we will soon have teams of 2 producing the work of 7. Ultimately, this is the main objective, and with it an increase in corporate profitability, but no CEO is willing to own this plan publicly at the moment.


Over time, we'll see executives attributing their companies' success to shrinking their workforces and leaning on AI platforms that augment the few remaining people performing non-automated tasks. We're already seeing the CEOs of IBM and UPS say they are cutting back on hiring for roles that could potentially be replaced by AI; both companies have already cut thousands of jobs in recent years.


A McKinsey study on the impact of AI estimates that at least 12 million workers will have to change occupations by 2030, and that lower-paying jobs will be hit hardest: Customer Service, Sales and Office Support above all. Science, healthcare and construction should be the least affected, given the dynamics described by the Baumol Effect. In the end, caring for patients and laying a slab will still be done by people.


Fortunately, at least some companies do not completely hide the extent of their goals, presenting tools that imitate human processes in frankly dystopian ways. The marketing may say otherwise, but the expectation is quite clear: to scale the hiring process by imitating human interviewers, in a rather crude simplification of these professionals' daily work.


In the relentless pursuit of profit, under the guise of augmenting human capacity, we are in fact working directly toward a world quite different from the ideals presented at the beginning of this text.


ColdFusion made a great video that served as a basis for me to delve deeper into the topics discussed here. It's worth watching.





 

Who really makes money from AI


In a gold rush, only those who sell shovels make money; those who climb the mountain barefoot chasing the dream are far more likely to fail. Currently, AI is being defined by a handful of companies and universities, and the two biggest pillars of the technology's evolution are Processing Power and the Development Community. Make no mistake: whoever sells you the best integration tools is simply waiting for your company to become dependent on the platform.


It is no wonder that the main and largest companies involved in AI are also cloud companies. AI startups will always have some infrastructure-related player on their cap table: training AI models is expensive, not only in processing capacity but in the water consumed for cooling and the energy needed to keep data centers running. As I begin to better understand the cluster demands of maintaining and evolving models like a future GPT-6, I start to doubt that the approach is even minimally viable in the coming years.


Training people to create solutions is also expensive. It is no wonder that NVIDIA, Microsoft, Google, Tencent and Alibaba have solutions for every type of company that wants to use their artificial intelligence tools, and invest heavily in development tooling: it is very unlikely that anyone will build the next training infrastructure and tooling for a new model without it being supervised by, or executed within, some cloud provider.


The vertical integration of AI, as described in Amy Webb's book, occurs as follows:


- Invest heavily in training professionals as early as possible, with niche knowledge of their tools (e.g. Microsoft Connect, Google Cloud), and ensure that whenever a new person enters the market they are co-opted into the platform and remain in its niche community

- Investment and partnership programs so that companies learn how to use the platforms and recommend their use to their customers

- Deliver extensive tooling, certification programs, partnership programs and awards to people and companies that present cases using the platforms and pre-determined models (Azure Cognitive Services, NVIDIA NGC, Alibaba Cloud)

- Provide the technical ecosystem with the lowest possible barrier to entry (such as Meta AI and Llama 3)

- Ensure new companies and products are launched with cloud incentives

- Fight for dominance in the ecosystem of professionals, companies and partners


OpenAI GPTs, Tencent AI Lab and Amazon Bedrock exist so that you cannot escape these ecosystems when building your new product. Once you're in, you will hardly get out. When NVIDIA's CEO says we no longer need to teach our kids to code, he is more interested in waving to investors and generating buzz than in telling the truth.


And then, obviously, there are the companies that sell data for AI (data annotation), which stand behind all the platforms: Amazon Mechanical Turk, CloudFactory, Scale AI and Remotasks, among others. As an example, Scale AI was valued at more than $7 billion in 2021.


These are the companies truly generating value with AI and investing heavily in the future; even if everything created with AI goes wrong, they still come out ahead.


Thanks!


 


If you've read this far: doubt everything I've written, do your own research, and read the articles I've suggested. We will need many more AI-literate professionals over the next decade.


I wrote this article to dig deeper into the examples I have seen in the market and into how this technology, so important for the future, is transforming our perception of the world. As a technologist, I believe it is part of my responsibility to warn against blindly accepting the hype and promises of tool makers (the shovel sellers), and to help prepare our market to be more skeptical.




