Slate Auto, a firm backed in part by Amazon founder Jeff Bezos, is unveiling a low-cost electric truck that can also change into an SUV. Its starting price: $20,000 after federal EV incentives. "A radically simple electric pickup truck that can change into whatever you need it to be — even an SUV," the Slate Auto website says. "Made in the USA at a price that’s actually affordable (no really, for real)." The two-door version can be converted into a five-seat SUV. The baseline truck is small: about two-thirds the size of a Chevy Silverado EV and about seven-eighths the size of a Ford Maverick. It has a payload capacity of 1,400 pounds, compared with the Maverick's 1,500 pounds. At less than 15 feet long, Slate says it's more akin to a 1985 Toyota pickup.
BEIJING — In the global race to produce robots that are smarter and faster, China’s humanoids have come a long way. Robots from cutting-edge Chinese companies can dance and spin or do roundhouse kicks, as they have demonstrated in videos that are all over China’s internet. Yet when humanoid robots were invited to join real flesh-and-blood runners for a half-marathon in Beijing this past weekend, the race garnered attention but also laid bare the challenges still facing the industry as China seeks to dominate technologies of the future. Some of the robots barely got started. One, designed with a woman’s body and face, collapsed moments after setting off, sending a group of engineers rushing to its side with laptops. Another, mounted to a platform with propellers, crashed into a barrier. A robot the size of a young child succumbed to a glitch and simply lay down on the starting line.
LONDON — European Union watchdogs fined Apple and Meta hundreds of millions of euros Wednesday as they stepped up enforcement of the 27-nation bloc’s digital competition rules. The European Commission imposed a 500 million euro ($571 million) fine on Apple for preventing app makers from pointing users to cheaper options outside its App Store. The commission, which is the EU’s executive arm, also fined Meta Platforms 200 million euros because it forced Facebook and Instagram users to choose between seeing personalized ads or paying to avoid them. The punishments were smaller than the blockbuster multibillion-euro fines that the commission has previously slapped on Big Tech companies in antitrust cases. Apple and Meta have to comply with the decisions within 60 days or risk unspecified “periodic penalty payments,” the commission said. The decisions were expected to come in March, but the self-imposed deadline slipped amid an escalating trans-Atlantic trade war with U.S. President Donald Trump, who has repeatedly complained about regulations from Brussels affecting American companies. The penalties were issued under the EU’s Digital Markets Act, also known as the DMA. It’s a sweeping rulebook that amounts to a set of dos and don’ts designed to give consumers and businesses more choice and prevent Big Tech “gatekeepers” from cornering digital markets. The DMA seeks to ensure “that citizens have full control over when and how their data is used online, and businesses can freely communicate with their own customers,” Henna Virkkunen, the commission’s executive vice president for tech sovereignty, said in a statement. “The decisions adopted today find that both Apple and Meta have taken away this free choice from their users and are required to change their behavior,” Virkkunen said. Both companies indicated they would appeal. Apple accused the commission of “unfairly targeting” the iPhone maker and said it “continues to move the goal posts” despite the company’s efforts to comply with the rules. Meta Chief Global Affairs Officer Joel Kaplan said in a statement that the “Commission is attempting to handicap successful American businesses while allowing Chinese and European companies to operate under different standards.” In a press briefing in Brussels, commission spokespeople sought to tamp down concerns that the penalties would inflame trade tensions. “We don’t care who owns a company. We don’t care where the company is located,” commission spokesperson Thomas Regnier told reporters. “We are totally agnostic on that front from a European Union standpoint.”
Joy Cardaño said she used to get commissioned almost every week to create anime-inspired art. Now, she said, that work has nearly come to a halt, with many online users seeming to gravitate instead toward art made by artificial intelligence. From Studio Ghibli-inspired illustrations to doll and action figure “starter packs,” an explosion of AI-generated images in recent weeks has sparked a fresh wave of concern among artists like Cardaño, who argue that using AI undermines the importance of trained artists and takes away their commission opportunities. “People who use it [AI generators] should be respectful of artists,” Cardaño, who goes by Joyblivion on Instagram, said in an email, calling the trend “so unethical.” “Even if the artists are vocal about how they don’t want their art to be used, they refuse to listen. I think whoever uses it or is thinking of using it should research how it impacts the art community.” Many in the art community echo the sentiment as they continue to monitor the latest advancements in AI, including the recent rollout of OpenAI’s GPT-4o, which can generate text, images and audio. ChatGPT users are able to generate images using the model for free. The rest of its capabilities are for paid users only, with membership prices starting at $20 a month. Cardaño, 30, who is based in the Philippines, said she has been a full-time artist since she graduated from college. She primarily sells her work on INPRNT, an online shop. Her commissioned pieces usually start at $100. After seeing the Ghibli trend go viral, she took to Instagram to highlight her past work in the hope of swaying people to pay for art instead. “Studio Ghibli fan art that I drew with my own hands without needing AI,” she wrote in an April 1 post, accompanied by a sample of her work.
Alphabet’s Google illegally dominated two markets for online advertising technology, a judge ruled Thursday, dealing another blow to the tech giant and paving the way for U.S. antitrust prosecutors to seek a breakup of its advertising products. U.S. District Judge Leonie Brinkema in Alexandria, Virginia, found Google liable for “willfully acquiring and maintaining monopoly power” in the markets for publisher ad servers and for ad exchanges, which sit between buyers and sellers. Websites use publisher ad servers to store and manage their ad inventories. Antitrust enforcers failed to prove a separate claim that Google had a monopoly in advertiser ad networks, she wrote. Lee-Anne Mulholland, Google’s vice president of regulatory affairs, said Google will appeal the ruling. “We won half of this case and we will appeal the other half,” she said in a statement, adding that the company disagrees with the decision about its publisher tools. “Publishers have many options and they choose Google because our ad tech tools are simple, affordable and effective." Google’s shares were down around 2.1% at midday. The decision clears the way for another trial, which has yet to be scheduled, to determine what Google must do to restore competition in those markets, such as selling off parts of its business. The Justice Department has said Google should have to sell off at least its Google Ad Manager, which includes the company’s publisher ad server and ad exchange. However, a Google representative said Thursday that Google was optimistic it would not have to divest part of the business as part of any remedy, given the court’s view that its acquisitions of advertising tech companies like DoubleClick were not anticompetitive.
New Jersey has sued the social gaming platform Discord for allegedly failing to adequately protect underage users from predators, becoming the first state to do so. The heavily redacted civil suit, filed Thursday, accuses Discord of violating the New Jersey Consumer Fraud Act by making it easy for children to create an account and by not taking more steps to prevent adult users from finding and contacting minors. Discord is designed as a hub for gamers to chat through text, audio and video, and has become popular as an app for chatting while playing video games, including ones particularly popular with children like Roblox and Minecraft. Calling itself a “fun and safe space for teens,” Discord bans anyone under the age of 13 and says it has a zero-tolerance policy toward people who exploit minors. New Jersey’s attorney general, Matthew Platkin, accused Discord of making it too easy for children under 13 to get on the platform and for predators to find and contact underage users. “They’ve waged a very extensive PR campaign to tell the public all the features that they put in place to protect kids on their app,” Platkin told NBC News. “They know that they’re not working, and they know that they’re not actually protecting kids the way they say they are.” In an emailed statement, a Discord spokesperson defended the company’s measures against child exploitation. “Discord is proud of our continuous efforts and investments in features and tools that help make Discord safer,” the spokesperson said. “Given our engagement with the Attorney General’s office, we are surprised by the announcement that New Jersey has filed an action against Discord today. We dispute the claims in the lawsuit and look forward to defending the action in court,” the spokesperson said.
Nvidia plans to produce AI supercomputer chips entirely in the United States for the first time. The semiconductor maker said in a blog post Monday that it had commissioned more than 1 million square feet of manufacturing space to build and test its Blackwell chips in Phoenix and is building supercomputer plants in Houston and Dallas. Nvidia said it would take at least a year to reach mass production scale at both plants. At the same time, Nvidia said its Blackwell chips have already started production at Phoenix chip plants run by Taiwan Semiconductor Manufacturing Co., a major semiconductor foundry.
Microsoft terminated the employment of two software engineers who protested at company events on Friday over the Israeli military’s use of the company’s artificial intelligence products, according to documents viewed by CNBC. Ibtihal Aboussad, a software engineer in the company’s AI division who is based in Canada, was fired on Monday over “just cause, wilful misconduct, disobedience or wilful neglect of duty,” according to one of the documents. Another Microsoft software engineer, Vaniya Agrawal, had said she would resign from the company on Friday, April 11. But Microsoft terminated her role on Monday, according to an internal message viewed by CNBC. The company wrote that it “has decided to make your resignation immediately effective today.” Both employees chose Microsoft’s 50th anniversary event to publicly voice their criticism. What Microsoft had hoped would be a celebratory period has turned into a brutal few days for the company, which is being hit, along with the rest of the market, by President Donald Trump’s widespread tariffs. It’s a topic that CEO Satya Nadella and his two predecessors, Bill Gates and Steve Ballmer, were forced to uncomfortably confront on Friday in an interview with CNBC’s Andrew Ross Sorkin. “As a Microsoft shareholder, this kind of thing is not good,” Ballmer said, about the tariffs. Meanwhile, the celebration itself captured headlines more for the protesters’ shared message than for Microsoft’s half-century of accomplishments. Microsoft didn’t immediately provide a comment.
LONDON — Artificial intelligence that can match humans at any task is still some way off — but it’s only a matter of time before it becomes a reality, according to the CEO of Google DeepMind. Speaking at a briefing in DeepMind’s London offices on Monday, Demis Hassabis said that he thinks artificial general intelligence (AGI) — which is as smart or smarter than humans — will start to emerge in the next five or 10 years. “I think today’s systems, they’re very passive, but there’s still a lot of things they can’t do. But I think over the next five to 10 years, a lot of those capabilities will start coming to the fore and we’ll start moving towards what we call artificial general intelligence,” Hassabis said. Hassabis defined AGI as “a system that’s able to exhibit all the complicated capabilities that humans can.” “We’re not quite there yet. These systems are very impressive at certain things. But there are other things they can’t do yet, and we’ve still got quite a lot of research work to go before that,” Hassabis said. Hassabis isn’t alone in suggesting that it’ll take a while for AGI to appear. Last year, Robin Li, CEO of Chinese tech giant Baidu, said he sees AGI as “more than 10 years away,” pushing back on excitable predictions from some of his peers about this breakthrough taking place in a much shorter timeframe. Hassabis’ forecast pushes the timeline for reaching AGI further back than what some of his industry peers have been sketching out. Dario Amodei, CEO of AI startup Anthropic, told CNBC at the World Economic Forum in Davos, Switzerland, in January that he sees a form of AI that’s “better than almost all humans at almost all tasks” emerging in the “next two or three years.” Other tech leaders see AGI arriving even sooner. Cisco’s Chief Product Officer Jeetu Patel thinks there’s a chance we could see an example of AGI emerge as soon as this year. “There’s three major phases” to AI, Patel told CNBC in an interview at the Mobile World Congress event in Barcelona earlier this month. “There’s the basic AI that we’re all experiencing right now. Then there is artificial general intelligence, where the cognitive capabilities meet those of humans. Then there’s what they call superintelligence,” Patel said. “I think you will see meaningful evidence of AGI being in play in 2025. We’re not talking about years away,” he added. “I think superintelligence is, at best, a few years out.” Artificial superintelligence, or ASI, is expected to arrive after AGI and surpass human intelligence. However, “no one really knows” when such a breakthrough will happen, Hassabis said Monday. Last year, Tesla CEO Elon Musk predicted that AGI would likely be available by 2026, while OpenAI CEO Sam Altman said such a system could be developed in the “reasonably close-ish future.”
Federal prosecutors are appealing a federal judge’s ruling in Wisconsin that possessing child sexual abuse material created by artificial intelligence is in some situations protected by the Constitution. The order and the subsequent appeal could have major implications for the future legal treatment of AI-generated child sexual abuse material, or CSAM, which has been a top concern among child safety advocates and has become a subject of at least two prosecutions in the last year. If higher courts uphold the decision, it could cut prosecutors off from successfully charging some people with the private possession of AI-generated CSAM. The case centers on Steven Anderegg, 42, of Holmen, Wisconsin, whom the Justice Department charged in May with “producing, distributing, and possessing obscene visual depictions of minors engaged in sexually explicit conduct and transferring obscene material to a minor under the age of 16.” Prosecutors alleged that he used an AI image generator called Stable Diffusion to create over 13,000 images depicting child sexual abuse by entering text prompts into the technology that then generated fake images depicting non-real children. (Some AI systems are also used to create explicit images of known people, but prosecutors do not claim that is what Anderegg was doing.) In February, in response to Anderegg’s motion to dismiss the charges, U.S. District Judge James D. Peterson allowed three of the charges to move forward but threw one out, saying the First Amendment protects the possession of “virtual child pornography” in one’s home. On March 3, prosecutors appealed. In the decision, Peterson denied Anderegg’s request to dismiss charges of distribution of an obscene image of a minor, transfer of obscene matter to a person under 16 and production of an image of a minor engaging in sexually explicit conduct. Anderegg’s lawyer did not respond to a request for comment. The Justice Department declined to comment. Many AI platforms have tried to prevent their tools from being used in creating such content, but some safety guardrails can easily be modified or removed, and a July study from the Internet Watch Foundation found that the amount of AI-generated CSAM posted online is increasing.