
Bright Lights, Bot City: Having A.I. Plan a Dream Trip to New York

A “Friday evening matinee”? To quote the Gershwins, it ain’t necessarily so. But that’s how modern artificial intelligence suggested I hit Broadway.

When I was asked to see what A.I. gets right and wrong about visiting New York City, I was deeply curious and felt well qualified for the assignment — I’ve been a resident of Manhattan since 1989, a frequent city tour guide for friends and family, and a journalist who has written about technology (including chatbots) since the 1990s.

I sampled several A.I.-planner sites with the same vacation request: Create an itinerary for a trip for two people to New York City from April 17 to 20 that suggests an affordable hotel (less than $250 a night) in the middle of the city, several iconic landmarks or museums, a matinee performance of an award-winning Broadway show and a great pizza stop. I asked for directions for accessible ways to get to each place from the hotel, and then made additional requests for suggestions if children were coming along.

While most of the sites offered many of the same classic New York spots, like the Museum of Modern Art, the user experience varied. (Note that all the sampled sites use OpenAI’s software in some way and The Times has an active copyright-infringement lawsuit against OpenAI.) If you are new to the world of A.I. travel planners, here are a few that may appeal to certain types of human travel planners.

If you want a friendly interface

With its energetic home page full of photos and features, Mindtrip (free) felt like the most welcoming A.I. planner for a newcomer. Its initial itinerary hit most of the top tourist stops, like the Statue of Liberty, the Metropolitan Museum of Art and Central Park, with links to the sites’ suggested highlights. Mindtrip also suggested the Pod 51 hotel on East 51st Street (about $303 a night), which is a great location, but rooms in the Pod chain aim for “chic minimalist,” which may not be for everyone, particularly families.
The Good: Manhattan sights tend to dominate the list, but Mindtrip suggested going over the Brooklyn Bridge for photo ops and Grimaldi’s Pizza — so points for getting to a second borough.

The Bad: The schedule for the third day suggested visiting the Statue of Liberty and the Empire State Building in the morning — and then going to the matinee of “Hamilton.” That seemed unrealistic with timed tours and travel across the city, especially since the Saturday matinee starts at 1 p.m. Swapping in the suggested stroll around Rockefeller Center and Times Square from another day in the itinerary made more logistical sense.

The Unexpected: When asked for a “hidden gem” to visit, it proposed the Tenement Museum, which reveals a century of New York City history through the experiences of its immigrants.

If you require details up front

Vacay (free; $10 a month for the premium plan), another web-based chatbot and planner, had a more text-heavy but clean interface and suggested several of the same city landmarks with relevant links. For those unsure about how to ask for information, the site has a helpful best-practices guide for writing A.I. prompts to get the best results. Vacay’s premium plan, designed for frequent travelers, offers enhanced A.I. models for more specific recommendations, tech support and advice on planning themed vacations.

The Good: While it lacked its own maps in the chat window, Vacay’s itinerary planner had more precise advice, not just suggesting Central Park but recommending Bethesda Terrace and Strawberry Fields within it. And it also named specific bus and subway lines to get to the destinations without requiring a separate request, based on the location of its suggested Pod 39 hotel on East 39th Street (about $290 a night). You can download your chat transcripts, even on the free plan.

The Bad: The Vacay bot suggested a “Friday evening matinee” of a Broadway show.
The Unexpected: The site advised visiting Top of the Rock for city views, which allows you to include the Empire State Building in your photos, so points for considering the skyline-selfie experience.

If ChatGPT is used for everything

The popular and pioneering ChatGPT (free; paid plans start at $20 a month for advanced features, like the new Deep Research tool) also recommended staying at the Pod 51 hotel; the Pod people have clearly had an influence on the Bot people.

The Good: ChatGPT made sensible plans for multiple activities in the same part of the city, like grouping a morning visit to the Statue of Liberty and Ellis Island with an afternoon stop at the 9/11 Memorial & Museum.

The Bad: ChatGPT also suggested a Friday matinee for several Broadway shows, despite the fact that Friday is not a matinee day for any of them. Some predicted walking times were impractical — hoofing it from the theater district to Joe’s Pizza on Carmine Street takes much longer than seven minutes; perhaps it really meant the Joe’s near Broadway and 40th Street.

The Unexpected: A stroll on the High Line and a visit to Chelsea Market popped up as a suggestion. Which, come to think of it, would be very nice on a spring day.

If a trusted travel site is vital

If you’d prefer to stick with a familiar brand, 25-year-old Tripadvisor is among those offering A.I.-planning help. To build a trip, you just answer a few questions about what you want to do and Tripadvisor presents a screen full of menu choices. Click the desirable options and the site builds a trip schedule. Among the hotel suggestions: the Pod Times Square on West 42nd Street (around $259 a night), leading me to believe that if you have “affordable hotel” in your N.Y.C. request, travelbots will suggest a Pod.

The Good: Tripadvisor had the best ideas for children, including a stop at the Hayden Planetarium and the Wonderland-inspired Alice’s Tea Cup restaurant.
The Bad: The site suggested the Alice’s location on the east side of Central Park instead of the one near the planetarium on the west side.

The Unexpected: Tripadvisor, which has a huge repository of user-generated reviews, switched up some of the pizza recommendations to include Don Antonio and Capizzi along with the usual John’s and Joe’s stops. Tripadvisor also had the most cheerful disclaimer: “A.I. isn’t perfect, but it’ll help you hit the ground running.”

Every A.I. travel planner tested here (along with others out there, including Layla, Wonderplan and the mobile-friendly GuideGeek) warns you that the information you get from it may not be correct. Take this to heart and double-check all of it.

Another tip: If you’ve never used an A.I. travel planner before, keep in mind that asking for everything in one big query can lead to some muddled responses. Start with the basic outline of the trip, like finding a hotel in a certain area for specific dates, and then ask about local attractions, transit directions, restaurant recommendations and other information in subsequent requests to build out your itinerary.

While A.I. planners are still mostly used for research and planning, autonomous A.I. agents like OpenAI’s Operator could soon be booking your trips as well, and you’ll really want to make sure that itinerary is correct.

Not a Coder? With A.I., Just Having an Idea Can Be Enough

I am not a coder. I can’t write a single line of Python, JavaScript or C++. Except for a brief period in my teenage years when I built websites and tinkered with Flash animations, I’ve never been a software engineer, nor do I harbor ambitions of giving up journalism for a career in the tech industry.

And yet, for the past several months, I’ve been coding up a storm. Among my creations: a tool that transcribes and summarizes long podcasts, a tool to organize my social media bookmarks into a searchable database, a website that tells me whether a piece of furniture will fit in my car’s trunk and an app called LunchBox Buddy, which analyzes the contents of my fridge and helps me decide what to pack for my son’s school lunch.

These creations are all possible thanks to artificial intelligence, and a new A.I. trend known as “vibecoding.” Vibecoding, a term that was popularized by the A.I. researcher Andrej Karpathy, is useful shorthand for the way that today’s A.I. tools allow even nontechnical hobbyists to build fully functioning apps and websites, just by typing prompts into a text box. You don’t have to know how to code to vibecode — just having an idea, and a little patience, is usually enough.

“It’s not really coding,” Mr. Karpathy wrote this month. “I just see stuff, say stuff, run stuff, and copy paste stuff, and it mostly works.”

My own vibecoding experiments have been aimed at making what I call “software for one” — small, bespoke apps that solve specific problems in my life. These aren’t the kinds of tools a big tech company would build. There’s no real market for them, their features are limited and some of them only sort of work. But building software this way — describing a problem in a sentence or two, then watching a powerful A.I. model go to work building a custom tool to solve it — is a mind-blowing experience. It produces a feeling of A.I. vertigo, similar to what I felt after using ChatGPT for the first time.
And it’s the best way I’ve found to demonstrate to skeptics the abilities of today’s A.I. models, which can now automate big chunks of basic computer programming, and may soon be capable of similar feats in other fields.

A.I. coding tools have existed for years. Earlier ones, like GitHub Copilot, were designed to help professional coders work faster, in part by finishing their lines of code the same way that ChatGPT completes a sentence. You still needed to know how to code to get the most out of them, and step in when the A.I. got stuck.

But over the past year or two, new tools have been built to take advantage of more powerful A.I. models that enable even neophytes to program like pros. These tools, which include Cursor, Replit, Bolt and Lovable, all work in similar ways. Given a user’s prompt, the tool comes up with a design, decides on the best software packages and programming languages to use, and gets to work building a product. Most of the products allow limited free use, with paid tiers that unlock better features and the ability to build more things.

To a non-programmer, vibecoding can feel like sorcery. After you type in your prompt, mysterious lines of code fly past, and a few seconds later, if everything goes well, a working prototype emerges. Users can suggest tweaks and revisions, and when they’re happy with it, they can deploy their new product to the web, or run it on their computers. The process can take just a few minutes, or as long as several hours, depending on the complexity of the project.

Here’s what it looked like when I asked Bolt to build me an app that could help me pack a school lunch for my son, based on an uploaded photo of the contents of my fridge. The app first analyzed the task and broke it down into component parts. Then it got to work. It generated a basic web interface, chose an image recognition tool to identify the foods in my fridge and developed an algorithm to recommend meals based on those items. If the A.I. 
needed me to make a decision — whether I wanted the app to list the nutritional facts of the foods it was recommending, for example — it prompted me with several options. Then it would go off and code some more. When it hit a snag, it tried to debug its own code, or backed up to the step before it had gotten stuck and tried a different method.

Roughly 10 minutes after I had entered my prompt, LunchBox Buddy — which is what the A.I. had decided to call my app — was ready. It suggested a generic turkey sandwich. You can try it for yourself here. (The version I built incorporates an A.I. image recognition tool that costs money to use; for this public web version, I’ve replaced it with a simulated image recognition feature so I don’t rack up a huge bill.)

Amazon Unveils Alexa+, Powered by Generative A.I.

Amazon’s Alexa is undergoing its biggest overhaul since debuting more than a decade ago. On Wednesday, Amazon said it was giving Alexa a new brain powered by generative artificial intelligence. The update, called Alexa+, is set to make the virtual assistant more conversational and helpful in booking concert tickets, coordinating calendars and suggesting food to be delivered. Alexa+ will cost $19.99 a month or be included for customers who pay for Amazon’s Prime membership program, which costs $14.99 a month. It will begin rolling out next month.

“Until right this moment, right this moment, we have been limited by the technology,” Panos Panay, the head of Amazon’s devices, said at a media event. “Alexa+ is that trusted assistant that can help you conduct your life and your home.”

With the changes, Amazon is aiming to catch up in generative A.I. for everyday users. While the Seattle company has in recent months made up for lost time in A.I. products and services that it sells to businesses and other organizations, its grip on consumer A.I. products has been narrower. Alexa’s upgrades, which were first teased in 2023, are Amazon’s biggest bet on becoming a force in consumer A.I.

The moves are also an opportunity to reboot Alexa, which has been perceived as having fallen behind other virtual assistants. In recent years, Alexa’s growth in the United States has generally stagnated, according to the research firm Consumer Intelligence Research Partners, with people turning to the assistant for only a few main tasks, such as setting timers and alarms, playing music and asking questions about the weather and sports scores.

At Wednesday’s event, Mr. Panay and other Amazon executives demonstrated how Alexa+ could do those things in a more personalized manner. Alexa+ could identify who was speaking and know the person’s preferences, such as favorite sports teams, musicians and foods, they said.
They also showed how a device powered by Alexa+ could suggest a restaurant, book a reservation on OpenTable, order an Uber and send a calendar invitation.

Alexa, which was a brainchild of Jeff Bezos, Amazon’s founder, debuted in 2014, wowing people with its ability to take verbal requests and translate them into actions. It became a symbol of Amazon’s innovation. Over the years, the company has highlighted some Alexa-connected devices, including Echo speakers, a connected microwave, a wall clock and a twerking teddy bear.

But wild experimentation has been out since Mr. Bezos stepped down as Amazon’s chief executive in 2021 and handed the company over to Andy Jassy, a longtime executive. Mr. Jassy reined in Amazon’s expenses, killed some projects that appeared to have no obvious prospects and oversaw layoffs. In 2023, he hired Mr. Panay, a Microsoft executive, to oversee devices. Mr. Panay’s top responsibility was to bring generative A.I. to Alexa and to unlock the promise of the all-helpful assistant that Amazon had long envisioned.

Soon after Mr. Panay started, Amazon said it was rebuilding Alexa’s brain with the kind of technologies that underpinned OpenAI’s ChatGPT chatbot. “The re-architecture of all of Alexa has happened,” Mr. Panay said on Wednesday.

As Amazon worked to update Alexa, competitors leapfrogged it. ChatGPT, for example, can hold extended, in-depth conversations, with some people developing emotional — and even sexual — relationships with A.I. personas. (The New York Times has sued OpenAI and its partner, Microsoft, claiming copyright infringement of news content related to A.I. systems. The companies have denied the claims.)

Bringing generative A.I. to Alexa was not easy because the virtual assistant faces challenges that a chatbot does not. Alexa might serve multiple users in a household, for instance, so it needs to distinguish who is speaking and personalize the responses.
Amazon also wants Alexa to be at the center of people’s lives and connected to multiple smart devices and services, which is complicated. It must integrate multiple A.I. systems, including ones built by Amazon and the start-up Anthropic, and interact with devices such as smart lightbulbs and with apps including Ticketmaster. Amazon also gave Alexa+ a personality, even training it with comedians to make it funny.

“In the fall, it was just too slow,” Mr. Panay said in an interview. Generative A.I. has also been afflicted by “hallucinations,” instances in which the A.I. systems serve up incorrect information. Because Alexa interacts with the real world — playing a song, ordering a product, turning off an alarm — Mr. Panay said Alexa had to reliably get things right.

He said he believed Alexa+ was finally both fast and accurate. “I think people will fall in love with it pretty quickly,” he said.

The Agony of Adoring Online Dogs

Norbert was practically a stuffed animal come to life. The three-pound mixed-breed internet-famous therapy dog dispensed joy simply by existing. Julie Steines started posting photos of Norbert on Instagram more than a decade ago: of him volunteering at children’s hospitals, nursing homes and schools; of him dressed as a wizard or a reindeer, wearing a beanie or a tie. His tiny pink tongue hung out of his mouth much of the time. Soon you could buy plush toys in his likeness, with profits going to charity. His mission as a therapy dog, according to his website, was simple: “to spread smiles, inspire kindness and bring comfort to those in need.”

It turns out that I, along with many of his nearly one million followers on Instagram alone, was among those in need. Any time I felt blue, I’d seek out his page for an infusion of happiness. And when I saw him pop up in my feed at random, a wave of endorphins flooded my brain.

When Norbert died last week, just shy of 16 years old, tens of thousands of comments and tributes poured in. “My family is heartbroken,” Ms. Steines wrote as part of a lengthy announcement.

Pet content remains one of the last bastions of joy on social media. Norbert and many other beloved online dogs — all blissfully unaware of their internet fame, or the internet at all — cut through a digital landscape growing less hospitable by the day. As petty fights and bizarre bots increasingly overwhelm online spaces, I find myself following more dogs and fewer people.

Instagram turns 15 years old this year, as will my oldest pup at home. When introduced, the platform, with its focus on photos and videos, elevated pet content to greater heights than any service that came before. It didn’t take long for Instagram to become populated with accounts dedicated to dogs — personal pages where the dogs were not the sidekicks but the stars, their humans the accessories.
These accounts would often be verified, like those of celebrities and politicians. There’s something distinct and humbling about forming a parasocial relationship with, and experiencing heartbreak from, an animal you’ve never met. Now that many of us have been on social media for a decade or more, it’s becoming impossible not to brace for the inevitable. And when these animals “cross the rainbow bridge,” as it’s said, I’ve scrambled to place my sadness as the families behind the pets come into focus, as does their grief, usually in a heart-wrenching caption.

When Henry the Colorado Dog died suddenly in 2022, leaving his best friend, a cat named Baloo, grief-stricken, I was inconsolable. Their page, with 2.3 million followers, had been a celebration of cinematic adventures: Henry and Baloo cuddling in a tent in the Rocky Mountains or floating in a boat on a river at sunset. With Henry gone, Baloo stopped eating and was floundering. Then I watched a triumphant story arc unfold as his family found Pan, a new canine companion who is as intrepid as Henry was and who’d go on to bond deeply with Baloo, softening the hurt but not replacing Henry, whose memory remains a strong presence on the page.

When Kabosu, the Shiba Inu who helped define the Doge meme, died last year at the age of 18, The New York Times published a proper obituary. When Bodhi, a Shiba Inu known online simply as the Menswear Dog and who modeled for Coach, died last year at age 15, he, too, was the subject of an article.

This inclination to honor such losses more officially can in part be attributed to the novelty of celebrity, but humans’ best friend seems to have taken on greater personal and cultural significance in general in recent years. Most dog owners consider their pets family, and some are even seeking ways to foster richer interspecies communication. Policies that allow workers time off to care for a sick pet or to grieve the loss of one are also gaining steam.
As is sometimes the case with the pages of notable people who die, the accounts of pets often endure, but forever with an asterisk. Though unlike these people — whom fans can honor by watching their movies, reading their books, listening to their music — the photos and videos taken of dogs are the “art” that they offered, and social media the stage on which they were admired.

At least for a while, I imagine the sight of Norbert’s pricked gray ears and black button nose will cause only sadness. But eventually, I will return ready to delight again in his soul-restoring magic, which, at least for me and those who never knew him in real life, is the gift he always provided.

The Best True Crime to Stream: TikTok Dreams to Nightmares

TikTok continues to be on shaky ground in the United States. Earlier this month, the Supreme Court upheld a law passed by Congress last year that required a ban of the Chinese-owned app unless it was sold to a government-approved buyer. Hours before the law took effect, TikTok went dark briefly, then flickered back to life when President Trump, a day before his inauguration, indicated support for the app. He then signed an executive order stalling the ban for 75 days.

Whether the app will disappear for good is unclear, but in the meantime, here are four true-crime stories associated with TikTok — the most downloaded app in the United States and the world in 2020, 2021 and 2022 — that captured broader attention.

It’s of course no secret that the glossy dance videos that have populated TikTok since its inception, along with much online content, are more fantasy than reality. But that’s little comfort in light of the revelations uncovered in this 2024 Netflix series. “Dancing for the Devil” primarily spends time with dancers who were managed by the talent company 7M Films and were members of Shekinah Church — both entities founded and led by Pastor Robert Shinn — as well as desperate family members of those still involved with 7M. These families claim that their loved ones are essentially trapped. Shinn created 7M seemingly to help TikTok dancers and aspiring influencers elevate their status. The dancers we hear from claim that 7M is a cult and that Shinn is an abusive cult leader. Accusations include fraud, labor violations, extortion, grooming and assault. (Shinn did not participate in the series and denies wrongdoing.)

“Dancing for the Devil” falls into a category of true crime that does less looking back and instead documents a situation that continues to unfold.
Our film critic commended the three-part series for not rushing the narrative, calling it “daring, instructive, thoughtful and moving.”

Last year I wrote about how true-crime storytellers used to have little in the way of real-time first-person footage to rely on. Now, as much of our daily lives are documented, the genre has transformed. And there has never quite been a trail of damning video and audio evidence as there was with this case — told in this 2024 Peacock documentary — about the 2021 murders of Ana Abulaban and Rayburn Barron, who were killed by Ana’s estranged husband, Ali Abulaban.

Ali was a TikTok star who, under the username JinnKid, gained prominence and millions of followers with his comedic Skyrim and “Scarface” impressions. He recorded much of his life on his phone, and as his and Ana’s marriage unraveled, he broadcast their fights live, dissolving the perfect image they had projected online. He even recorded audio during the moment of the murders, and neighbors’ doorbell cameras in their luxury San Diego high-rise captured the aftermath. This is a story of domestic violence, jealousy and addiction, and of how a fixation on social-media fame can warp reality beyond repair.

Each episode of this Investigation Discovery series, which debuted last year and is streaming on Max and Hulu, examines a different crime connected to the underbelly of social media. Here we learn about Sania Khan, a photographer and Pakistani American influencer whose TikTok following swelled when she started to speak candidly about her split from her husband, Raheel Ahmad, after a tumultuous and abusive marriage. Confessional-type content is everywhere on social media, but for Khan, airing out her private life was particularly brave because of the conservative South Asian and Muslim communities of which she was part — cultures that expect women to maintain the status quo and put their family’s reputation first.
While scores of women celebrated her candor and commiserated with her pain in the comments, there was also a brutal backlash from those who thought her posts were shameful and proceeded to harass, bully and threaten her. When she was just hours from starting a new chapter in her life, the worst happened.

This episode is particularly poignant because Khan’s story is largely told through her closest friends, who focus on her effervescent personality and her mission to modernize her culture, push past taboos and reclaim her identity.

Fitbit Agrees to Pay $12 Million for Not Quickly Reporting Burn Risk With Watches

Reports that Fitbit’s Ionic smartwatch was overheating began in 2018 and continued into 2020. But according to U.S. officials, the company did not quickly report, as the law requires, that the battery inside the watch was creating an unreasonable risk of serious injury or death to consumers.

On Thursday, the U.S. Consumer Product Safety Commission announced that Fitbit had agreed to pay a $12.25 million civil penalty over its delay in reporting that the lithium-ion battery in the watch can overheat, creating a burn hazard.

The commission noted that in early 2020, Fitbit had issued a firmware update to reduce the potential for battery overheating, as consumers continued to report suffering burns because of the watch. But Fitbit did not voluntarily recall the Ionic smartwatch until March 2, 2022. By then, the commission said, Fitbit had received at least 174 reports globally of the lithium-ion battery’s overheating, leading to 118 reported injuries, including two cases of third-degree burns and four of second-degree burns.

“Fitbit should have immediately reported numerous overheating incidents, including second- and third-degree burns,” Commissioner Rich Trumka Jr. said Thursday in a statement. “Instead, Fitbit broke the law by delaying its reporting, leaving consumers exposed to the burn hazard. Many of these injuries could have been prevented.”

In a statement on Friday, a Fitbit spokesman said, “Customer safety continues to be our top priority, and we’re pleased to resolve this matter with the C.P.S.C. stemming from the 2022 voluntary recall of Fitbit Ionic.”

About one million of the devices, which track activity, heart rate and sleep, were sold in the United States from September 2017 through December 2021, with an additional 693,000 sold globally. Fitbit said that the injury reports represented fewer than 0.01 percent of all Ionic watches sold. The company stopped production of the Ionic in 2020, according to the consumer commission.
At the time of the 2022 recall, owners were offered $299 after returning their Ionic watches and received a discount code for select Fitbit devices, according to the consumer commission. As part of the settlement agreement, Fitbit agreed to submit an annual report, including updates on the effectiveness of its revamped compliance policies.

Google bought Fitbit for $2.1 billion in early 2021 after agreeing not to use the health and wellness data that Fitbit had collected to target ads at internet users.

In 2014, Fitbit recalled more than a million of its Force wristbands after customers complained of severe skin irritation. But the company avoided a recall of its Flex wristbands later that year, after similar complaints, by adding a warning about nickel allergies and a sizing guideline to prevent users from wearing the wristbands too tightly.

Why You Might Suddenly Be Following Trump on Instagram and Facebook

On Tuesday, the day after the inauguration of President Donald J. Trump, many Instagram and Facebook users found themselves following him on the social media apps even though they had not signed up to do so. What gives?

Meta, which owns Facebook and Instagram, said it was part of a regular process in which White House social media accounts are handed over when a new president takes office. It added that there were some other bugs in the process that may have mucked up the gears of the transition. Let’s walk through what happened.

Why am I following Donald Trump, Melania Trump and JD Vance on Instagram and Facebook?

Just as the federal government has to deal with the transition of power between administrations, Meta has to deal with it, too. For years, companies like Meta and X — previously known as Facebook and Twitter, respectively — have had to handle the social media accounts held by the office of the president as it changed hands after an election. That ramped up after Barack Obama took office in 2009 and fully embraced social media to garner support from voters digitally.

By 2016, the companies needed to figure out how to hand those accounts off between administrations. Meta and X decided that the official POTUS, VP and first lady accounts on Facebook, Instagram and X would be switched to the new administration while retaining the existing followers of those accounts. That meant that if you followed President Obama in 2016, you were automatically switched over to follow President Trump when he took office in his first administration in 2017. Mr. Obama’s posts were archived under a different handle, while Mr. Trump’s account reset with none of Mr. Obama’s old posts attached.

That transition occurred again in 2020, when Joseph R. Biden Jr. was elected and took over the official presidential account. On Monday, after Mr. Trump was sworn in, the switch occurred again. That’s why you may be seeing his posts in your feed now.
But I swear I wasn’t following any presidential accounts before.

Lots of people have said this week that they never followed Mr. Biden or Mr. Trump before and are sure they have been added as followers against their will. Meta said it wasn’t forcing people to follow Mr. Trump.

“People were not made to automatically follow any of the official Facebook or Instagram accounts for the president, vice president or first lady,” Andy Stone, a Meta spokesman, said in a statement on Threads. “Those accounts are managed by the White House so with a new administration, the content on those pages changes.”

One possible explanation: Four years between administrations is a long time, and people can forget what accounts they signed up to follow.

When I try to unfollow these accounts, the apps won’t let me. What’s up with that?

This is where it’s not you, it’s Meta. The company said it “may take some time for follow and unfollow requests to go through” as the account transitions occur. It is possible that the company is receiving such a high volume of unfollow requests during the transition that it is running into errors processing them all. Meta claims it will be sorted out soon, but declined to go into detail on why it was happening.

Why am I seeing recommendations to follow President Trump’s and Vice President Vance’s accounts?

This is another instance of a sweeping change at Meta. The company previously insisted that users did not want to see political content across its apps and had removed that type of content on Facebook, Instagram and Threads. That meant people saw fewer posts and accounts related to politicians and contentious social issues. It was Meta’s way of making its platform seem, well, a bit nicer.
But this year, Mark Zuckerberg, Meta’s chief executive, did an about-face and started reinserting political content into people’s feeds. He and others at Meta said that was because they had heard people wanted to see more political content again. The change was part of a larger shift at Meta to allow more types of posts and content to spread across its platform in the Trump era. You can change your settings in Facebook and Instagram to see fewer political posts.

I’m also seeing people talk about censoring Democrats on social media. What is that about?

Add this one to the list of Meta’s screw-ups. On Tuesday, people began noticing that they could not search for posts that included the hashtag “#democrats” on some of Meta’s apps. That, along with the new Trump administration and Mr. Zuckerberg’s recent embrace of Mr. Trump, led people to believe that the company was forcing posts from Democrats out of their apps.

Not true, Meta said, adding that it had made an unfortunate error that it was working quickly to fix. Mr. Stone said that because of the error, users were unable to search for a gamut of topics and that the mistake was affecting “not just those on the left.”

How to Create a Multimedia Digital Journal of Your Life

Still looking for a New Year’s resolution for self-improvement? Consider keeping a journal, which studies have shown might help with one’s mental well-being and anxiety issues, while also providing a creative outlet for personal expression. Handsome paper-based diaries and notebooks are available if you want to go the screen-free sensory route, but if you prefer a more multimedia approach to journaling, wake up your phone. Free apps that come with Apple’s iOS software and Google’s Android system allow you to add photos, audio clips and more to corral your thoughts — and set up electronic reminders to write regularly. Here’s an overview.

Getting Started

Keeping a digital diary requires a few basic steps: picking an app, writing an entry and adding new posts on a regular basis. And don’t let the fear of typing long, contemplative dispatches on a small screen dissuade you. Just dictate your thoughts to your iPhone or Android phone with its transcription tools, although check its privacy policy if you’re nervous about your data.

Using Apple’s Journal

Apple released its Journal app in December 2023 and added new features last year in its iOS 18 update, including the ability to print entries. (The app is not yet available for the iPad.) To set it up, just find the Journal icon on your home screen or in the App Library, open it and follow the onscreen instructions.

To compose a journal entry, tap the plus icon (+) at the bottom of the screen and select the New Entry button at the top of the next screen or under a suggested topic. Go to the text field to title your entry and start writing — or tap the microphone icon at the bottom corner of the keyboard to dictate. In the row of icons above the keyboard, you can format the text with bold, italic or other styles; get more topic suggestions; add photos from the library or the camera; add an audio recording; and note your location.
You can describe your current mood with the State of Mind screen, which can be shared with the Health app (if you allow it). With your permission, the app shows you a list of topic suggestions drawn from your photos, locations and activities. You can turn off the suggestions by opening the iPhone’s Settings icon, selecting Apps, choosing Journal and tapping the button next to Skip Journaling Suggestions.

While you’re in the Journal settings, you can set other controls, like requiring Face ID, Touch ID or a passcode to unlock the app, or backing up your entries online to iCloud. You can also set up a schedule for journaling and enable notifications nudging you to write. You can bookmark and edit your compositions by tapping the three-dot menu icon in each entry’s lower-right corner. The Journal app has a search function for looking up older entries if you don’t feel like scrolling back in time.

Using Google Keep

Google has yet to release a similar dedicated journaling app, but its 12-year-old Google Keep can do the job, organizing notes, audio clips, web pages, photos and drawings. To use it, you need a Google account and the Keep app. The app is available for Android and iOS (including the iPad), and Keep content is backed up online, where it can be viewed in a web browser.

Once you’ve installed the Keep app, open it and tap the plus button (+) in the bottom-right corner to start an entry. Using the icons at the bottom of the text-entry screen, you can do things like add a photo or give the entry a background color. Creating and adding a “journal” label filters your posts from other notes or lists you may use within the app. And while Keep, unlike Apple’s Journal, can’t pepper you with suggestions, you can ask Google’s Gemini or your favorite artificial intelligence assistant for topic ideas.
Other Options

Samsung Galaxy users have the Samsung Notes app as another diary option, and keeping a journal on one of the company’s pen-based tablets recreates the pen-to-paper vibe for the electronic age. If you want a journal app with additional features (like automatically adding the day’s weather conditions), you have plenty of other choices, but you’ll probably need to pay for the premium product. Among the many apps that work on most platforms are Day One (about $3 a month), Diarium ($10 to buy) and the ambitious, A.I.-powered Reflectary (about $7 a month).

Journal apps make it easier to write about your life without the performative aspect of social media. And paying less attention to what everyone else is doing gives you more time to spend on yourself.

Fable, a Book App, Makes Changes After Offensive A.I. Messages

Fable, a popular app for talking about and tracking books, is changing the way it creates personalized summaries for its users after complaints that an artificial intelligence model used offensive language. One summary suggested that a reader of Black narratives should also read white authors.

In an Instagram post this week, Chris Gallello, the head of product at Fable, addressed the problem of A.I.-generated summaries on the app, saying that Fable began receiving complaints about “very bigoted racist language, and that was shocking to us.” He gave no examples, but he was apparently referring to at least one Fable reader’s summary posted as a screenshot on Threads. It rounded up the book choices of the reader, Tiana Trammell, saying: “Your journey dives deep into the heart of Black narratives and transformative tales, leaving mainstream stories gasping for air. Don’t forget to surface for the occasional white author, okay?”

Fable replied in a comment under the post, saying that a team would work to resolve the problem. In his longer statement on Instagram, Mr. Gallello said that the company would introduce safeguards. These included disclosures that summaries were generated by artificial intelligence, the ability to opt out of them and a thumbs-down button that would alert the app to a potential problem.

Ms. Trammell, who lives in Detroit, downloaded Fable in October to track her reading. Around Christmas, she had read books that prompted summaries related to the holiday. But just before the new year, she finished three books by Black authors. On Dec. 29, when Ms. Trammell saw her Fable summary, she was stunned. “I thought: ‘This cannot be what I am seeing. I am clearly missing something here,’” she said in an interview on Friday. She shared the summary with fellow book club members and on Fable, where others shared offensive summaries that they, too, had received or seen.
One person who read books about people with disabilities was told her choices “could earn an eye-roll from a sloth.” Another said a reader’s books were “making me wonder if you’re ever in the mood for a straight, cis white man’s perspective.”

Mr. Gallello said the A.I. model was intended to create a “fun sentence or two” taken from book descriptions, but some of the results were “disturbing” in what was intended to be a “safe space” for readers. Filters for offensive language and topics failed to stop the offensive content, he added.

Fable’s head of community, Kim Marsh Allee, said in an email on Friday that two users received summaries “that are completely unacceptable to us as a company and do not reflect our values.” She said all of the features that use A.I. were being removed, including the summaries and year-end reading wraps, and that a new version of the app was being submitted to the app store.

A.I. has become an independent, timesaving but potentially problematic voice in many communities, including religious congregations and news organizations. With A.I.’s entry into the world of books, Fable’s action highlights the technology’s ability, or failure, to navigate the subtle interpretations of events and language that are necessary for ethical behavior. It also raises the question of how closely employees should check the work of A.I. models before letting the content loose.

Apps including Fable, Goodreads and The StoryGraph have become popular forums for online book clubs and for sharing recommendations, reading lists and genre preferences. Some public libraries use the apps to create online book clubs; in California, San Mateo County’s public libraries have offered premium access to the Fable app through their library cards. Some readers responded online to Fable, saying they were switching to other book-tracking apps or criticizing the use of any artificial intelligence in a forum meant to celebrate and amplify human creativity through the written word.
“Just hire actual, professional copywriters to write a capped number of reader personality summaries and then approve them before they go live. 2 million users do not need ‘individually tailored’ snarky summaries,” one reader said in reply to Fable’s statement.

Another reader, who learned about the controversy on social media, pointed out that the A.I. model “knew to capitalize Black and not white” but still generated racist content. She added that it showed some creators of A.I. technology “lack the deeper understanding of how to apply these concepts toward breaking down systems of oppression and discriminatory perspectives.”

Mr. Gallello said that Fable was deeply sorry. “This is not what we want, and it shows that we have not done enough,” he said, adding that Fable hoped to earn back trust.

After she received the summary, Ms. Trammell deleted the app. “It was the presumption that I do not read outside of my own race,” she said. “And the implication that I should read outside of my own race if that was not my prerogative.”