In today’s issue we talk about some of the jobs AI could take away from us in the near future, and what kind of heavier burden it will leave behind for us to pick up. If we are expected to hand over to our AI agents all of the small and time-consuming tasks, for example, what will be left for us to do? Will our workday and workload consist only of the hardest, seemingly unsolvable problems? How many hours will we have to solve them, and with what degree of job satisfaction? And then, what will happen when an AI recreates a movie starring Sinatra and Tupac? Would we trust a person with a degree over a chatbot? Will we flatten our AI-enhanced work towards neurotypical outputs?
The conversation, as always, is open.
Midjourney/prompt: "many watches in a human form are running after a person vibe. there is sunlight" - Variations (Subtle)
On training the algorithm and its human user
The impact of AI over corporate workforce: the McKinsey report
McKinsey just released a report on the economic impact of generative AI, writing at length about AI task augmentation versus AI task automation. The importance of the report lies both in the technique used and in the results it presents.
The report covers 16 business functions, examined through the company’s proprietary database. Based on historical analysis of various technologies, McKinsey modeled a range of adoption timelines from 8 to 27 years between the beginning of adoption and its plateau, using sigmoidal curves. Many natural processes, such as the learning curves of complex systems, exhibit a progression that starts small, accelerates, and approaches a ceiling over time; in such cases, data scientists reach for a sigmoid function when no better model is available. The time range McKinsey chose accounts for the many factors that could affect the pace of adoption, including regulation, levels of investment, and management decision-making within firms.
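To make the modeling choice concrete, here is a minimal sketch in Python of the kind of sigmoidal (logistic) adoption curve described above. The function, parameter names, and scenario values are illustrative assumptions, not figures taken from the McKinsey report.

```python
import numpy as np

def adoption_share(years, midpoint, steepness):
    """Logistic (sigmoid) share of full adoption reached `years` after adoption begins."""
    return 1.0 / (1.0 + np.exp(-steepness * (years - midpoint)))

# Illustrative fast and slow scenarios, loosely echoing the idea that adoption
# plateaus somewhere between ~8 and ~27 years after it starts (assumed values).
t = np.arange(0, 30)
fast = adoption_share(t, midpoint=4, steepness=1.2)    # fast-adoption scenario
slow = adoption_share(t, midpoint=14, steepness=0.35)  # slow-adoption scenario

print(f"Fast scenario, share adopted at year 8:  {fast[8]:.0%}")
print(f"Slow scenario, share adopted at year 27: {slow[27]:.0%}")
```

The point of the sigmoid is simply that adoption is slow at first, accelerates in the middle, and flattens near saturation; shifting the midpoint and steepness is how one sweeps between early and late plateau scenarios.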
The main outcome of the report is that AI will dramatically improve the productivity of high-level knowledge workers. Such improved labor productivity has the potential to generate gains on the order of roughly $400 billion or more in each of the following areas: marketing, sales, software engineering, and product R&D.
There’s a price to pay or, some might say, a bottleneck: we can achieve this only if energy and time are spent on training these professionals, starting now.
Current generative AI and other technologies have the potential to automate work activities that absorb 60 to 70 percent of employees’ time today. In contrast, we previously estimated that technology has the potential to automate half of the time employees spend working. The acceleration in the potential for technical automation is largely due to generative AI’s increased ability to understand natural language, which is required for work activities that account for 25 percent of total work time. Thus, generative AI has more impact on knowledge work associated with occupations that have higher wages and educational requirements than on other types of work.
Would an AI cross a picket line?
Unless you have been living under a rock (and in that case, I hope you are enjoying your summer holiday), you know that writers in Hollywood have been on strike. Then the actors joined them. It all started over wage fairness and streaming rights, but now AI is taking much of the blame. What happened?
According to the Guardian: “A major flashpoint of the first week of the actors’ strike was a comment from Duncan Crabtree-Ireland, Sag-Aftra’s chief negotiator, who said that studios had “proposed that our background performers should be able to be scanned, get paid for one day’s pay, and their company should own that scan, their image, their likeness, and to be able to use it for rest of eternity, on any project they want, with no consent and no compensation”. In a statement, the Alliance of Motion Picture and Television Producers (AMPTP) disputed this accusation and committed to asking for prior consent for specific uses of an actor’s likeness and to compensating each such use.
There are many viewpoints to consider.
The issue of contractual consent for rights over AI image manipulation, which is apparently now a given in our future, is bigger than what is making the news. If a professional human likeness, as in the case of actors and actresses, can be used to create commercial products, then appropriate compensation must be in order. And if consent is already problematic for living people, what about dead people who appeared in orphan works or public domain works without ever having had the chance to say a thing about it? What will protect the dead from deepfakes, or from the unforeseen event that their likeness is fed into an algorithm’s training data?
Justine Bateman, an actor, producer, writer and computer scientist adds more fuel to the conversation:
Generative AI can only function if you feed it a bunch of material. In the film business, it constitutes our past work. Otherwise, it’s just an empty blender and it can’t do anything on its own. (…) But when I could see that it was going to be used to widen profit margins, in white-collar jobs and more generally replace human expression with our past human expression, I just went, “This is an end of the progression of society if we just stayed here.” If you keep recycling what we’ve got from the past, nothing new will ever be generated. If generative AI started in the beginning of the 20th century, we would never have had jazz, rock ’n’ roll, film noir. That’s what it stops. There are some useful applications to it — I don’t know of that many — but pulling it into the arts is absolutely the wrong direction.
Dr. Google meets Dr. Chatbot
Nowadays search engines play a role in our health-related decisions because we google our symptoms before deciding whether to see a doctor. The future integration of LLM-chatbots into search engines could have a potentially disruptive effect: since a chatbot imitates a real person’s conversational style, its “human touch” could increase the perceived credibility of the answers it gives.
At the moment, AI is notorious for producing text that is inconsistent with reality and muddles facts, a phenomenon known as “hallucination”. A hallucination is a confident response by an AI that does not seem to be justified by its training data. For example, according to Wikipedia, a hallucinating chatbot might, when asked to generate a financial report for Tesla, falsely state that Tesla's revenue was $13.6 billion (or some other random number apparently "plucked from thin air"). Hallucinations are a persistent issue in LLM-chatbot systems, and we will probably never get rid of them entirely. For this reason, legislators are already taking precautions: alongside a Regulation, the EU is drafting a new Directive that updates the existing rules on product liability to reflect just that. However, legislation moves slowly, and the conversation needs to reach the masses before they start using LLM-chatbots as diagnostic tools and trusting them blindly.
The medical sector is already plagued by a lot of distrust and disinformation, and the conversation is just starting to challenge the enthusiasm of new business ideas.
On this very point, in a paywalled article published in Nature, “Large language model AI chatbots require approval as medical devices”, several scholars issue a warning:
“Chatbots powered by artificial intelligence used in patient care are regulated as medical devices, but their unreliability precludes approval as such.”
Language, Bias and Images: Different Ways to Frame AI
Humans Are Biased. Generative AI Is Even Worse.
The title may be drastic, and the topic is not new, but the data illustrates it clearly. Humans are inherently biased, and artificial intelligence is no exception; in fact, it often exacerbates the issue. This analysis of images generated by Stable Diffusion demonstrates a noticeable amplification of stereotypes. The crucial question is: Who should bear the responsibility for this? Is it the dataset providers, the model trainers, or the creators themselves?
Read on Bloomberg (by Leonardo Nicoletti and Dina Bass)
The way we talk about artificial intelligence affects our understanding
We need to approach the supersonic pace of AI advancement with balance. Language plays a crucial role here: Megan Rashid points out that during moments of hype, language bordering on fanaticism risks lowering our attention threshold and disqualifying critical or questioning opinions, or even those that are simply less enthusiastic, which get dismissed as detractors of innovation.
The fundamental problem with cultish language is that it inflates expectations about artificial intelligence. When these expectations are not met, trust in AI decreases. There's also the issue of cognitive load:
The low-hanging fruit will be picked first. As AI automates dull and monotonous tasks, humans will be left with more complex and difficult work. While there are clear pros and cons to this tradeoff, the risk could be that we become overloaded. This cognitive overload could actually increase the likelihood of making errors. When the stakes are high like in healthcare or terrorism monitoring, these errors could have serious impacts.
According to the author, balance is the key to successfully managing the integration of these technologies into our lives, at all levels. Language is a fundamental piece of this puzzle.
A better angle to frame AI problems
Because I am not only autistic but a highly visual thinker, meaning comes to me first in images. My mind is like a simulator that runs the equivalent of videos, showing how things happen, especially how they may go wrong. It’s the antithesis of the highly abstract reasoning that led to the radical breakthroughs in A.I. But it’s visual thinkers like me who are specially equipped to make sure that this progress doesn’t come at too high a toll.
Temple Grandin proposes an unconventional perspective on how to manage the potential risks surrounding artificial intelligence: she points not to job losses, which technology has always both created and destroyed, but to cybersecurity risks. That is why she believes that thinking about artificial intelligence not in abstract terms but as physical infrastructure gives us better angles from which to frame the problem.