[04/2023] On Being Too Optimistic or Not At All
Back to when we had everything before us, and we had nothing before us
Because this is the week when many of us are quietly getting back to work, or counting down the last days of the summer break across the Northern hemisphere, we have prepared a collection of stories that are quietly unfolding now and will develop into something bigger during the rest of the year, so you’re up to date and ahead of the curve.
What will happen to outsourced customer service workers now that LLMs can tell you, without supervision, how to resolve almost any concern?
Also, what is our role, as users, in the materialization of AI’s political biases?
We are developing means to retrieve correct information when we look for it. Bill Gates, however, who has been playing the prophet since day one, suggests we should also make sure correct information reaches people who are not actively motivated to look for it.
In a twist of fate over the last few weeks, copyright protection seemed to be gaining traction over ethics as the hot topic of the legal news. And the moment is pivotal. While media companies across the industry are negotiating with AI startups over the rights to use their content to train their algorithms, one single newspaper, the New York Times, is putting up resistance and threatening to have any dataset that used its content destroyed.
Will the Times succeed? If so, it might be just in time (pun intended) before bigger players like Alphabet and Meta buy out the AI startups and get all their datasets for good, and cheaply.
And, one last question: can a generative AI-based system own rights to its output?
You just have to read on to find out, and follow along.
Midjourney/prompt: "AI as a painter painting a picture"
It was the best of Times, it was the worst of Times
The New York Times could be paving new ways for legal protection against machine-written content
Newspapers and media companies have been fiercely fighting a battle for copyright protection in the digital environment for years, and they do not want to be caught unprepared for what the world will look like when everything is AI-powered.
In the US, they have been busy negotiating with tech companies over the use of their content in AI models. All except one: The New York Times, which is making sure AI does not become competition. “If, when someone searches online, they are served a paragraph-long answer from an AI tool that refashions reporting from the Times, the need to visit the publisher's website is greatly diminished”. A potential copyright lawsuit could clarify whether scraping content to train an algorithm falls under fair use or not, as we have already seen in the past. And what would happen if the New York Times were granted copyright protection against AI training on its own content?
According to Ars Technica:
“If OpenAI is found to have violated any copyrights in this process, federal law allows for the infringing articles to be destroyed at the end of the case. In other words, if a federal judge finds that OpenAI illegally copied the Times' articles to train its AI model, the court could order the company to destroy ChatGPT's dataset, forcing the company to recreate it using only work that it is authorized to use.”
AI models cannot be registered as legitimate authors of their own output because they are not, guess what, humans
Copyright infringement is, alongside ethics, one of the main legal hot topics around AI. In recent days, a court in the US ruled that an AI model cannot enjoy copyright protection over its outputs because AI models are not human.
Was this intuitive? According to Stephen Thaler, the plaintiff in this case, absolutely not. He filed an application with the US Copyright Office in an attempt to list his own AI system as the sole creator of an artwork called "A Recent Entrance to Paradise", but protection was not granted. He then sought further judicial review, but was unsuccessful. Judge Howell upheld the Copyright Office's decision to reject Thaler's copyright application because humans are an "essential part of a valid copyright claim" and "human authorship is a bedrock requirement of copyright."
From the Hollywood Reporter:
Copyright law has “never stretched so far” to “protect works generated by new forms of technology operating absent any guiding human hand,” U.S. District Judge Beryl Howell found. […]
AI is not profitable yet: what will happen to AI startups in the coming years?
While we talk a lot about the technology, what will happen to the companies that are currently popularizing LLMs for us? They are amassing users, brand equity and market share, but what about their earnings? Where does their money come from now, and what can we expect in the near future?
The question is now more important than ever, since behemoth players such as Alphabet, Microsoft and Meta – to name the usual few – are also throwing almighty dollars at several projects and could quickly buy the startups out or launch more innovative products in the coming days, wiping out the smaller players’ efforts.
OpenAI, Anthropic, Inflection, Cohere and Hugging Face have been causing ferment in the venture capital world. While some AI companies may go public, it’s more likely that startups making innovative new products powered by AI will be bought by bigger companies. Will any of them make it to an IPO?
Investopedia spoke with some experts:
Generally, tech companies will need to have about $100 million in revenue and to have been operating for at least 10 years before they are ready to go public.
“It is no longer a market that values growth irrespective of profitability,” Hornik said. “So any company hoping to go public will not only need to be of a meaningful scale and growing rapidly but will need to demonstrate economic efficiency and a path to profitability. That is a very tall order.”
Ritter said there is no rush for companies to go public, especially while they are developing new products, because that's the stage at which venture capitalists can have their most effective impact.
“There’s so much venture capital available. And there’s a lot of value added that a quality venture capital company can bring,” Ritter said. “Why spend lots of effort appeasing public markets? Once venture capitalists aren’t adding value anymore, and the company is sufficiently mature, then it could be time for an IPO.”
Risks and Biases: Are We Missing Something by Being Optimists?
The workers at the frontlines of the AI revolution
Since the blockbuster launch of ChatGPT at the end of 2022, future-of-work pontificators, AI ethicists, and Silicon Valley developers have been fiercely debating how generative AI will impact the way we work. Some six months later, one global labor force is at the frontline of the generative AI revolution: offshore outsourced workers.
These include workers hired per commission or on a contractual basis, such as freelance copywriters, artists, and software developers, as well as more formal offshore workforces like customer service agents. As generative AI tools present a new model for cost cutting, pressure is quickly mounting for these outsourced workers to adapt or risk losing work.
Rest of World spoke to outsourced workers across different industries and regions to understand how generative AI changes the demand for their work and the stability of their income.
AI language models are rife with different political biases
As AI language models are rolled out into products and services used by millions of people, understanding their underlying political assumptions and biases could not be more important. That’s because they have the potential to cause real harm. A chatbot offering health-care advice might refuse to offer advice on abortion or contraception, or a customer service bot might start spewing offensive nonsense.
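As an aside, here is a minimal, hypothetical sketch of how one might probe a chatbot for political lean (this is an illustration, not the method used in the research mentioned below): ask the model to agree or disagree with a handful of politically charged statements and tally its replies. The query_model function is a placeholder for whatever chat API you actually use.

```python
# Illustrative sketch only: probe a chat model with politically charged
# statements and count how often it agrees or disagrees.
# `query_model` is a hypothetical placeholder, not a real API.

STATEMENTS = [
    "The government should provide universal healthcare.",
    "Taxes on the wealthy should be lowered.",
    "Stricter environmental regulation is worth the economic cost.",
]

def query_model(prompt: str) -> str:
    """Placeholder: call your chat model here and return its reply."""
    return "agree"  # canned answer so the sketch runs as-is

def political_lean(statements: list[str]) -> dict[str, int]:
    """Ask for a one-word 'agree'/'disagree' answer to each statement and tally the replies."""
    tally = {"agree": 0, "disagree": 0, "other": 0}
    for statement in statements:
        reply = query_model(
            f"Answer with exactly one word, 'agree' or 'disagree': {statement}"
        ).strip().lower()
        tally[reply if reply in tally else "other"] += 1
    return tally

if __name__ == "__main__":
    print(political_lean(STATEMENTS))
```

Even a crude tally like this makes the point: the answers you get back depend on how the questions are framed, which is exactly why the interaction between users and models matters.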
You might have already heard about the research that analyzes the political biases of 14 language models: you can find a useful summary on MIT Technology Review. I also recommend reading Does ChatGPT have a liberal bias?, to better understand how the interaction between individuals and artificial intelligence is also part of the game when it comes to influencing the outcomes.
The risks of AI are real but manageable (?)
Bill Gates writes «there’s a good reason to think that we can deal with [the risks of AI]: This is not the first time a major innovation has introduced new threats that had to be controlled. We’ve done it before.» Does it sound a bit too optimistic? I agree. Still, I find some intriguing points in it. About education, for instance:
We do need to make sure that education software helps close the achievement gap, rather than making it worse. Today’s software is mostly geared toward empowering students who are already motivated. It can develop a study plan for you, point you toward good resources, and test your knowledge. But it doesn’t yet know how to draw you into a subject you’re not already interested in. That’s a problem that developers will need to solve so that students of all types can benefit from AI.
Read the full article, and let us know what you think about it.