THE HAGUE, Netherlands — The European Union’s law enforcement agency warned Tuesday that artificial intelligence is turbocharging organized crime, which is eroding the foundations of societies across the 27-nation bloc as it becomes intertwined with state-sponsored destabilization campaigns.
The grim warning came at the launch of the latest edition of Europol’s report on organized crime, which is published every four years, compiled using data from police across the E.U., and will help shape law enforcement policy in the bloc in coming years.
“Cybercrime is evolving into a digital arms race targeting governments, businesses and individuals. AI-driven attacks are becoming more precise and devastating,” said Europol Executive Director Catherine De Bolle. “Some attacks show a combination of motives of profit and destabilization, as they are increasingly state-aligned and ideologically motivated,” she added.
The report, the E.U. Serious and Organized Crime Threat Assessment 2025, said offenses ranging from drug trafficking to people smuggling, money laundering, cyberattacks and online scams undermine society and the rule of law “by generating illicit proceeds, spreading violence, and normalizing corruption.”
The volume of child sexual abuse material available online has increased significantly because of AI, which makes it more difficult to analyze imagery and identify offenders, the report said.
“By creating highly realistic synthetic media, criminals are able to deceive victims, impersonate individuals and discredit or blackmail targets. The addition of AI-powered voice cloning and live video deepfakes amplifies the threat, enabling new forms of fraud, extortion, and identity theft,” it said.
States seeking geopolitical advantage are also using criminals as contractors, the report said, citing cyberattacks against critical infrastructure and public institutions “originating from Russia and countries in its sphere of influence.”
Telegram CEO Pavel Durov has returned home to Dubai, he said Monday, seven months after being arrested in France over charges that the platform was being used for criminal activity.
“I’ve returned to Dubai after spending several months in France due to an investigation related to the activity of criminals on Telegram. The process is ongoing, but it feels great to be home,” Durov posted on his Telegram channel Monday.
A spokesperson for the Paris prosecutor’s office told NBC News that Durov remains under investigation.
“I want to thank the investigative judges for letting this happen, as well as my lawyers and team for their relentless efforts in demonstrating that, when it comes to moderation, cooperation, and fighting crime, for years Telegram not only met but exceeded its legal obligations,” Durov said.
Durov, an enigmatic Soviet-born tech entrepreneur who has long claimed to be a champion of free speech, was arrested in Paris in August. The Paris prosecutor’s office said he had been detained as part of a larger investigation into the platform’s “complicity” in alleged crimes related to child sex abuse material (CSAM), among other accusations.
Last fall, after being released by law enforcement but required to stay in France, Durov announced plans to “significantly improve” Telegram’s response to criminals who abuse the platform.
Headquartered in Dubai, Telegram is rare among global social media platforms for not having overt ties to either the United States or China. It’s particularly popular in the Middle East, eastern Europe and Russia, and in recent years has also become popular among some in the American far right.
OpenAI is asking the U.S. government to make it easier for AI companies to learn from copyrighted material, citing a need to “strengthen America’s lead” globally in advancing the technology.
The proposal is part of a wider plan that the tech company behind ChatGPT submitted to the U.S. government on Thursday for President Donald Trump’s coming “AI Action Plan.” The administration solicited input from interested parties across the private sector, government and academia, framing the future policy as a shift that would “prevent unnecessarily burdensome requirements from hindering private sector innovation.”
In its proposal, OpenAI urged the federal government to enact a series of “freedom-focused” policy ideas, including an approach that would no longer compel American AI developers to “comply with overly burdensome state laws.”
Copyright in particular has plagued AI developers, as many continue to train their models on human work without informing the original creators, obtaining consent or providing compensation. OpenAI has been sued by several news outlets including the Center for Investigative Reporting, The New York Times, the Chicago Tribune and the New York Daily News over claims of copyright infringement. Several authors and visual artists have also taken legal action against the company over unauthorized use of their copyrighted content.
Still, OpenAI said it believes its strategy — the encouragement of “fair use” policies and fewer intellectual property restrictions — could “[protect] the rights and interests of content creators while also protecting America’s AI leadership and national security.” It did not elaborate on how creators’ rights and interests would be protected.
Many leaders in the AI industry and members of the Trump administration have framed America’s dominance in AI advancements as a matter of national security, comparing it to a high-stakes arms race.
“The federal government can both secure Americans’ freedom to learn from AI, and avoid forfeiting our AI lead to the PRC by preserving American AI models’ ability to learn from copyrighted material,” OpenAI’s proposal states, using an abbreviation for China’s formal name, the People’s Republic of China.
Shortly after he took office, Trump issued an executive order that revoked former President Joe Biden’s policies on AI, stating that the previous directives acted “as barriers to American AI innovation.” Biden’s “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence” executive order, issued in October 2023, stated that “irresponsible use [of AI] could exacerbate societal harms,” including threats to national security.
Artificial intelligence technology is becoming increasingly integral to everyday life, with an Elon University survey finding that 52% of U.S. adults have used AI large language models like ChatGPT, Gemini, Claude and Copilot.
The survey, conducted in January by the Imagining the Digital Future Center at the university in North Carolina, found that 34% of its 500 respondents who had used AI said they use large language models (LLMs) at least once a day. Most popular was ChatGPT, with 72% of respondents reporting they have used it. Google’s Gemini was second, at 50%.
It has become increasingly common for people to develop personal relationships with AI chatbots. The survey found that 38% of users believe LLMs will “form deep relationships with humans,” and more than half reported having had spoken conversations with chatbots. Around 9% of users said they use the models mainly for “social kinds of encounters like casual conversation and companionship.” Respondents said the models can express a variety of personality traits, including confidence, curiosity and even a sense of humor.
“These findings start to establish a baseline for the way humans and AI systems will evolve together in the coming years,” Lee Rainie, director of the Imagining the Digital Future Center, told NBC News in a statement. “These tools are increasingly being integrated into daily life in sometimes quite intimate ways at the level of emotion and impact. It’s clearly shaping up as the story of another chapter in human history.”
That is consistent with the overall trend: 51% of respondents use LLMs for personal endeavors rather than work-related activities. For work, respondents reported using the models with apps such as Slack, PowerPoint and Zoom, and for tasks such as writing emails, researching ideas and summarizing documents. More than half of respondents said the models have helped them improve their productivity.
Many respondents reported anxieties about the technology. Sixty-three percent thought the models could replace a significant amount of human-to-human communication, and 59% thought they could cause a significant number of job losses.
The technology’s spread comes as President Donald Trump’s administration pushes for increased investment in AI.
Meta’s upcoming Community Notes feature for monitoring misinformation through crowdsourcing will use some technology developed by Elon Musk’s X for its similar service.
On Thursday, Meta revealed more details of its new content moderation tool in a blog post, saying it incorporates the same open-source algorithm that powers X’s Community Notes. Meta said that over time it plans to modify the algorithm to better serve its Facebook, Instagram and Threads apps.
“As X’s algorithm and program information is open source — meaning free and available for anyone to use — we can build on what X has done, learn from the researchers who have studied it, and improve the system for our own platforms,” Meta said in the post. “As our own version develops, we may explore different or adjusted algorithms to support how Community Notes are ranked and rated.”
Meta CEO Mark Zuckerberg pitched Community Notes in January as the company’s preferred replacement in the U.S. for third-party fact-checking, which he shuttered as part of a broader policy change that also relaxed certain content moderation guidelines. The company will begin testing Community Notes next week in the U.S.
Last month, Meta said that users can apply to become contributors as long as they meet certain requirements, including being over 18 and having a verified phone number. Contributors will not be able to submit Community Notes on advertisements, but will be able to do so on “almost any other forms of content, including posts by Meta, our executives, politicians and other public figures,” the blog post said.
Posts hit with Community Notes can’t be appealed, but there’s also no additional penalty for content that’s flagged.
A community note on a Facebook post.
“Notes will provide extra context, but they won’t impact who can see the content or how widely it can be shared,” the blog post said.
Meta doesn’t plan to open source or publicly release more technical details about its Community Notes system, but is considering the option for the future, Rachel Lambert, director of product management at Meta, said in a media briefing. So far, about 200,000 people have signed up to become Community Notes contributors, “and the waitlist remains open for those who wish to take part in the program,” the company’s blog post said.
Neil Johnson, a George Washington University physics professor and expert in how misinformation and hate speech spread online, told CNBC in February that a Community Notes program can help provide context for online content, but is not a substitute for “formal fact-checking.” Johnson characterized a Community Notes model as an “imperfect system” that can potentially be exploited by large groups or organizations with their own agendas.
Meta said in the blog post that “publishing a note requires agreement between different people,” a policy that helps “safeguard against organized campaigns attempting to game the system and influence what notes get published or what they say.” The company said the model will be expanded across the country “once we are comfortable from the initial beta testing that the program is working in broadly the way we believe it should, though we will continue to learn and improve it as we go.”
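Meta has not published its own technical details, but the X ranking code it says it is building on is public: notes are scored with a “bridging” matrix factorization, in which a note is published only when raters who usually disagree both mark it helpful. The toy Python sketch below illustrates that idea only; the ratings data, hyperparameters and publication threshold are invented for illustration and are not Meta’s or X’s production values.

```python
import numpy as np

# Toy ratings: rows = raters, columns = notes, 1 = "helpful", 0 = "not helpful".
# Raters 0-2 and raters 3-5 stand in for two opposing viewpoints.
R = np.array([
    [1.0, 1.0, 0.0],  # viewpoint A
    [1.0, 1.0, 0.0],
    [1.0, 1.0, 0.0],
    [1.0, 0.0, 1.0],  # viewpoint B
    [1.0, 0.0, 1.0],
    [1.0, 0.0, 1.0],
])
# Note 0 is endorsed across both groups; notes 1 and 2 only by one side each.

n_raters, n_notes = R.shape
rng = np.random.default_rng(0)

# Model: rating ~ mu + b_u[rater] + b_n[note] + f_u[rater] * f_n[note].
# The 1-D factor term f_u * f_n soaks up partisan agreement, so the note
# intercept b_n captures helpfulness that "bridges" the two viewpoints.
mu = 0.0
b_u = np.zeros(n_raters)
b_n = np.zeros(n_notes)
f_u = rng.normal(0.0, 0.1, n_raters)
f_n = rng.normal(0.0, 0.1, n_notes)

lr, reg = 0.05, 0.03  # illustrative learning rate and L2 penalty
for _ in range(2000):  # plain SGD over all observed ratings
    for u in range(n_raters):
        for n in range(n_notes):
            err = R[u, n] - (mu + b_u[u] + b_n[n] + f_u[u] * f_n[n])
            mu += lr * err
            b_u[u] += lr * (err - reg * b_u[u])
            b_n[n] += lr * (err - reg * b_n[n])
            f_u[u], f_n[n] = (
                f_u[u] + lr * (err * f_n[n] - reg * f_u[u]),
                f_n[n] + lr * (err * f_u[u] - reg * f_n[n]),
            )

THRESHOLD = 0.25  # illustrative cutoff; a real system tunes this carefully
for n in range(n_notes):
    verdict = "publish" if b_n[n] > THRESHOLD else "not shown"
    print(f"note {n}: bridging intercept {b_n[n]:+.2f} -> {verdict}")
```

In this sketch only note 0 clears the bar: the one-sided notes score well with their own camp, but that agreement is absorbed by the viewpoint factor rather than the note intercept, which is the property Meta’s post gestures at when it says publishing “requires agreement between different people.”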
President Donald Trump turned the South Lawn of the White House into a temporary Tesla showroom Tuesday in a conspicuous favor to his adviser Elon Musk, the car company’s billionaire CEO.
Tesla delivered five of its vehicles to the White House and parked them on a driveway for Trump to personally inspect, hours after he said on Truth Social that he planned to buy a Tesla to demonstrate his support for Musk and for the slumping company.
With Musk beside him, Trump declared the vehicles “beautiful” and in particular praised the company’s unusually designed Cybertruck. “As soon as I saw it, I said, ‘That is the coolest design,’” Trump said.
Though Trump frequently attacked electric vehicles during last year’s campaign, he told reporters that he had heard good things about Teslas from his friends. He sat in the driver’s seat of a sedan, with Musk seated beside him, and said he planned to buy one.
“The one I like is that one, and I want the same color,” he said, pointing to a red Model S. The vehicle is listed on the Tesla website for $73,490, or $88,490 for the all-wheel-drive Model S Plaid. He did not take a test drive but said he might “another time.”
NEW YORK — Michelle Obama and her brother, Craig Robinson, will host a new weekly podcast series starting this month, with each episode featuring a special guest from the worlds of entertainment, sports, health and business.
“IMO with Michelle Obama & Craig Robinson” will address “everyday questions shaping our lives, relationships and the world around us,” according to a press release. IMO is slang for “in my opinion.”
Some of the guests slated to speak to the former first lady and Robinson, executive director of the National Association of Basketball Coaches, include the actors Issa Rae and Keke Palmer and psychologist Dr. Orna Guralnik. Other guests include filmmakers Seth and Lauren Rogen; soccer star Abby Wambach; authors Jay Shetty, Glennon Doyle and Logan Ury; editor Elaine Welteroth; radio personality Angie Martinez; media mogul Tyler Perry; actor Tracee Ellis Ross; husband-and-wife athlete and actor Dwyane Wade and Gabrielle Union; and Airbnb CEO Brian Chesky.
The first two episodes — the first an introduction and the second featuring Rae — will premiere Wednesday. New episodes will be released weekly and will be available on all audio platforms and YouTube.
“With everything going on in the world, we’re all looking for answers and people to turn to,” Michelle Obama said in a statement. “There is no single way to deal with the challenges we may be facing — whether it’s family, faith, or our personal relationships — but taking the time to open up and talk about these issues can provide hope.”
Michelle Obama has had two other podcasts — “The Michelle Obama Podcast” in 2020 and “The Light We Carry” in 2023. Her husband, former President Barack Obama, hosted a series of conversations about American life with Bruce Springsteen. The new podcast is a production of Higher Ground, the media company founded in 2018 by the former president and first lady.
Two Evangelical Christian leaders sent an open letter to President Trump on Wednesday, warning of the dangers of out-of-control artificial intelligence and of automating human labor.
The letter comes just weeks after the new Pope, Leo XIV, declared he was concerned with the “defense of human dignity, justice and labor” amid what he described as the “new industrial revolution” spurred by advances in AI.
“As people of faith, we believe we should rapidly develop powerful AI tools that help cure diseases and solve practical problems, but not autonomous smarter-than-human machines that nobody knows how to control,” reads the open letter, signed by the Reverends Johnnie Moore and Samuel Rodriguez. “The world is grappling with a new reality because of the pace of the development of this technology, which represents an opportunity of great promise but also of potential peril, especially as we approach artificial general intelligence.”
Rodriguez, the president of the National Hispanic Christian Leadership Conference, spoke at Trump’s first presidential inauguration in 2017. Moore, who is also the founder of the public relations firm Kairos, served on Trump’s Evangelical executive board during his first presidential candidacy.
The letter is a sign of growing ties between religious and AI safety groups, which share some of the same worries. It was shared with journalists by representatives of the Future of Life Institute—an AI safety organization that campaigns to reduce what it sees as the existential risk posed by advanced AI systems.
The world’s biggest tech companies now all believe that it is possible to create so-called “artificial general intelligence”—a form of AI that can do any task better than a human expert. Some researchers have even invoked this technology in religious terms—for example, OpenAI’s former chief scientist Ilya Sutskever, a mystical figure who famously encouraged colleagues to chant “feel the AGI” at company gatherings.
The emerging possibility of AGI presents, in one sense, a profound challenge to many theologies. If we are in a universe where a God-like machine is possible, what space does that leave for God himself?
“The spiritual implications of creating intelligence that may one day surpass human capabilities raises profound theological and ethical questions that must be thoughtfully considered with wisdom,” the two Reverends wrote in their open letter to President Trump. “Virtually all religious traditions warn against a world where work is no longer necessary or where human beings can live their lives without any guardrails.”
Though couched in adulatory language, the letter presents a vision of AI governance that differs from Trump’s current approach. The president has embraced the framing of the U.S. as in a race with China to get to AGI first, and his AI czar, David Sacks, has warned that regulating the technology would threaten the U.S.’s position in that race. The White House AI team is stacked with advisors who take a dismissive view of alignment risks—the idea that a smarter-than-human AI might be hostile to humans, escape their control, and cause some kind of catastrophe.
“We believe you are the world’s leader now by Divine Providence to also guide AI,” the letter says, addressing Trump, before urging him to consider convening an ethical council to weigh not only “what AI can do but also what it should do.”
“To be clear: we are not encouraging the United States, and our friends, to do anything but win the AI race,” the letter says.
“There is no alternative. We must win. However, we are advising that this victory simply must not be a victory at any cost.”
The letter echoes themes that have increasingly been explored inside the Vatican, not just by Pope Leo XIV but also by his predecessor, Pope Francis. Last year, in remarks at a Vatican event about AI, Francis argued that AI must be used to improve, not degrade, human dignity. “Does it serve to satisfy the needs of humanity, to improve the well-being and integral development of people?” he asked. Or does it “serve to enrich and increase the already high power of the few technological giants despite the dangers to humanity?”