
Why I can’t stop writing about Elon Musk

“I hope I don’t have to cover Elon Musk for a while,” I thought last week after sending TechScape out to readers. Then I got a message from the news editor: “Can you keep an eye on Elon Musk’s Twitter feed this week?”

So I ended up reading the feed of the world’s most powerful posting addict in great detail, and my brain turned to liquid and dripped out of my ears:

His shortest break came on Saturday night, when he logged off after retweeting a meme comparing London’s Metropolitan Police to the Nazi SS, then went back online four and a half hours later to retweet a crypto influencer complaining about prison sentences for Britons who took part in the protests.

But somehow I was still surprised by what I found. I knew the broad outlines of Musk’s online presence from years of reporting on him: a tripartite split between promoting his real companies Tesla and SpaceX, eagerly reposting cheap nerd humor, and increasingly right-wing political agitation.

But following Musk in real time reveals how his chaotic posting style has been warped by his shift to the right. His advertising for Tesla is increasingly laced with culture-war language. The Cybertruck in particular is promoted as though buying one could help defeat the Democrats in November’s US presidential election. The cheap nerd humor mentioned above is mixed with anger at the world for not thinking he’s the coolest person. And the right-wing political agitation is becoming more and more extreme.

Musk’s involvement in the UK unrest seems to have driven him deeper into the arms of the far right than ever before. This month, for the first time, he tweeted at Lauren Southern, a far-right Canadian internet personality best known in the UK for receiving an entry ban from Theresa May’s government because of her Islamophobia. He doesn’t just tweet at her: he also supports her financially, sending her around £5 a month through Twitter’s subscription feature. Then there was the headline-grabbing retweet of the co-chair of Britain First. On its own, that could have been put down to Musk not knowing what pond he was swimming in; two weeks later, the pattern is clearer. These are his people now.

Well, that’s all right then

A nice example, from the world of AI, of the difference between science press releases and the papers behind them. The press release, from the University of Bath:

AI does not pose an existential threat to humanity, according to a new study.

LLMs have a superficial ability to follow instructions and excellent language skills, but they lack the potential to learn new skills without explicit instruction. This means that they remain inherently controllable, predictable and safe.

The paper, by Lu et al:

It has been claimed that large language models comprising billions of parameters and pre-trained on extensive web-based corpora acquire certain skills without being specifically trained for them… We present a novel theory that explains emergent skills by taking into account their potential confounding factors and support this theory using over 1,000 experiments. Our results suggest that supposedly emergent skills are not truly emergent, but are the result of a combination of contextual learning, model memory, and linguistic knowledge.

Our work is a fundamental step in explaining the power of language models. It provides a blueprint for their efficient use and highlights the paradox of their ability to excel in some cases but fail in others, demonstrating that their capabilities should not be overestimated.

The press release for this story went viral for predictable reasons: Everyone loves to see the titans of Silicon Valley brought to their knees, and the existential risk of AI has become a polarizing issue in recent years.

But the paper shows much less than the university’s press office would like it to. That is a pity, because what the paper does show is genuinely interesting and important. There is a lot of focus on so-called “emergent” capabilities in frontier models: tasks and skills that were not present in the training data, but that the AI system displays in practice.

These emergent capabilities worry people concerned about existential risk, because they suggest that AI safety is harder to guarantee than we would like. If an AI can do something it wasn’t trained to do, there is no easy way to guarantee the safety of a future AI system: you can leave things out of the training data, but it might still work out how to do them.

The paper shows that, at least in some situations, these emergent capabilities are nothing of the sort. Rather, they are the result of what happens when you take an LLM like GPT and hammer it into the form of a chatbot before asking it to solve problems in a question-and-answer conversation. This process, the paper argues, means that the chatbot can never really be asked “zero-shot” questions for which it has no prior training: the art of prompting ChatGPT is essentially a matter of teaching it a little about what form the answer should take.
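To make that distinction concrete, here is a minimal, hypothetical Python sketch (the prompts and helper function are illustrative, not taken from the paper) contrasting a bare “zero-shot” question with the kind of implicitly few-shot prompt a chatbot actually receives, where the worked examples in the prompt do much of the teaching.

```python
# A sketch of the distinction the paper draws: what looks like a "zero-shot"
# question to a chatbot usually still carries in-context scaffolding -- the
# instructions, examples and formatting teach the model what shape the
# answer should take.

ZERO_SHOT = "What is the sentiment of this review: 'The battery died after a day.'"


def build_prompt(examples: list[tuple[str, str]], query: str) -> str:
    """Assemble an in-context (few-shot) prompt: labelled examples, then the query.

    The paper's rough argument is that performance attributed to "emergent"
    ability is largely driven by this kind of in-context learning plus
    memorised linguistic knowledge.
    """
    lines = ["Classify the sentiment of each review as Positive or Negative.", ""]
    for review, label in examples:
        lines += [f'Review: "{review}"', f"Sentiment: {label}", ""]
    lines += [f'Review: "{query}"', "Sentiment:"]
    return "\n".join(lines)


if __name__ == "__main__":
    few_shot = build_prompt(
        [("Great screen, fast delivery.", "Positive"),
         ("Stopped working after a week.", "Negative")],
        "The battery died after a day.",
    )
    print("--- zero-shot ---\n" + ZERO_SHOT)
    print("\n--- few-shot ---\n" + few_shot)
```

The point, as the paper frames it, is that once you account for this in-context scaffolding and the model’s memorised linguistic knowledge, the supposedly emergent ability largely evaporates.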

That’s an interesting result! While it’s not proof that the AI apocalypse is impossible, it is – if you want good news – proof that it’s unlikely to happen tomorrow.

Training pain

Nvidia is accused of “unjust enrichment”. Photo: Dado Ruvic/Reuters

Nvidia used YouTube videos to train its AI systems. Now that decision is coming back to bite it:


A federal lawsuit alleges that Nvidia, which focuses on developing chips for AI, used videos from YouTube creator David Millette for its AI training work. The suit accuses Nvidia of “unjust enrichment and unfair competition” and seeks class action status to include other YouTube content creators with similar claims.

According to the lawsuit, filed Wednesday in the Northern District of California, Nvidia illegally “scraped” YouTube videos to train its Cosmos AI software. Nvidia used software on commercial servers to evade detection by YouTube and download “approximately 80 years’ worth of video content per day,” the lawsuit says, citing a 5 August report by 404 Media.

This lawsuit is unusual in the AI world, because Nvidia has been somewhat secretive about the sources of its training data. Most AI companies that have faced lawsuits have been fairly open, even proud, about their disregard for copyright restrictions. One example is Stable Diffusion, which sourced its training data from the open-source LAION dataset. Now:

(Judge) Orrick found that the artists had reasonably argued that the companies had infringed their rights by illegally storing works, and that Stable Diffusion, the AI image generator in question, may have been based “to a significant extent on copyrighted works” and “designed to facilitate that infringement.”

Of course, not all AI companies are in the same position here. Google has a unique advantage: everyone gives the company permission to train its AI on their material. Why? Because otherwise you are shut out of search entirely:

Many website owners say they can’t afford to stop Google’s AI from summarizing their content.

That’s because the Google tool that crawls web content to generate its AI answers is the same one that indexes web pages for search results, the publishers say. Blocking Alphabet Inc.’s Google in the way websites have blocked some of its AI rivals would also hurt a site’s ability to be found online.
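A short, hypothetical Python sketch of that dilemma (the domain and the robots.txt rules are invented for illustration): a publisher can disallow a dedicated AI crawler such as OpenAI’s GPTBot without touching search, but the Googlebot crawl that puts a page in search results is the same crawl that feeds Google’s AI answers, so there is no rule that blocks one without the other.

```python
# Illustrative robots.txt a publisher might serve: refuse OpenAI's GPTBot
# outright, but leave Googlebot alone, because blocking Googlebot would also
# drop the site from Google search results.
from urllib.robotparser import RobotFileParser

ROBOTS_TXT = """\
User-agent: GPTBot
Disallow: /

User-agent: Googlebot
Disallow:
""".splitlines()

parser = RobotFileParser()
parser.parse(ROBOTS_TXT)

page = "https://example.com/article"  # hypothetical URL
for agent in ("GPTBot", "Googlebot"):
    print(f"{agent:9s} may fetch {page}: {parser.can_fetch(agent, page)}")

# GPTBot is refused, Googlebot is allowed. But because Google's AI answers
# draw on the same Googlebot crawl that powers search, the only robots.txt
# lever the publisher has against them is to block Googlebot itself -- and
# with it, ordinary search visibility.
```

That asymmetry is the leverage the publishers are describing: opting out of Google’s AI effectively means opting out of Google.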

Ask me anything

What was I thinking? Ask me that, and any other tech questions.

One more personal note. After 11 years, I’m leaving the Guardian at the end of this month, and on 2 September I’ll be writing TechScape for the last time. I’ll be answering readers’ questions, big and small, on my way out, so if you’ve ever wanted an answer to anything, from tech advice to industry gossip, hit reply and drop me an email.

The broader TechScape

TikTok bores you. Photo: Jag Images/Getty Images/Image Source
