Top 10 AI News
OpenAI’s Sora app joins Meta in pushing AI-generated videos; experts worry about ‘AI slop’
OpenAI has launched a new app called Sora, a short-form video platform in the mold of TikTok and Instagram Reels. Sora lets users generate AI-powered videos of themselves in imaginative scenarios, in anime or hyper-realistic styles. It was released in the U.S. and Canada shortly after Meta unveiled its own AI video feature. Experts have raised alarms about a flood of such AI-generated clips, dubbed “AI slop,” saying it could overwhelm authentic content and mislead viewers online. OpenAI says it will monitor usage and adjust the experience (for example, by prioritizing familiar content) to avoid negative effects like excessive doomscrolling.
‘AI actor’ Tilly Norwood stirs outrage in Hollywood
The debut of Tilly Norwood — an AI-generated “actress” created by Dutch producer Eline Van der Velden — has caused a firestorm in Hollywood. Promoted as a real acting talent, Norwood drew sharp criticism from actors and guilds. The Screen Actors Guild (SAG-AFTRA) and performers like Melissa Barrera and Natasha Lyonne have condemned AI “actors,” asserting that acting requires human experience and emotion. The backlash highlights fears that AI could replace creative labor. Van der Velden argues that AI characters are a new art form, but many in the industry worry this trend could undermine authenticity. Despite the controversy and calls for boycotts, Norwood’s Instagram (showing her in everyday activities) continues to gain followers.
Amazon unveils new generation of AI-powered devices, including revamped Kindle
Amazon revealed its latest AI-enhanced devices under the Alexa+ platform. Highlights include redesigned Kindle Scribes, lighter and faster for reading and note-taking. New Echo devices (Dot Max, Studio, Show 8 and 11) now provide personalized insights and recognize individual users. Fire TV units gain smarter AI search and real-time content recommendations. Ring’s video doorbells feature upgraded 2K/4K cameras with “Familiar Faces” recognition and a pet-finding tool (“Search Party”). All these products use subtle background AI to fit seamlessly into daily life. The updates, presented by Amazon Devices chief Panos Panay in New York, come amid rising AI competition from tech rivals.
OpenAI’s ChatGPT now lets users buy from Etsy, Shopify in push for chatbot shopping
OpenAI announced that ChatGPT users can now purchase items directly from Shopify and Etsy merchants through the chatbot. The new “Instant Checkout” feature (built with Stripe) allows merchants to sell products within ChatGPT interactions. OpenAI says listings are ranked by price, availability and seller quality, without favoritism. This integration aims to open a new revenue stream for OpenAI (which still operates at a loss) by tapping into e-commerce fees. It also puts ChatGPT into competition with tech giants Amazon and Google in the online shopping space.
California Gov. Gavin Newsom signs landmark bill creating AI safety measures
Governor Gavin Newsom has approved a landmark California law, signed Sept. 29, to regulate powerful AI systems. Companies developing high-compute AI (like large language models) must implement and publicly disclose safety protocols to guard against misuse (for example, developing bioweapons or cyber sabotage). AI developers must monitor systems, report any safety incidents within 15 days, and face fines up to $1 million per violation. The law (authored by State Sen. Scott Wiener) includes whistleblower protections and a public research cloud for transparency. Exemptions ease requirements for startups to encourage innovation. The measure defines “catastrophic risk” as incidents causing over $1 billion in damage or 50+ deaths. Some tech groups argued for federal regulation instead, but companies like Anthropic supported the balanced approach. Newsom said California must lead on AI safety amid federal inaction.
Lufthansa Group to cut 4,000 jobs by 2030 with help of AI, sees stronger profits
Lufthansa announced it will reduce about 4,000 jobs and speed up the retirement of older aircraft by 2030, using AI and digital tools to streamline operations. The airline group plans to automate many back-office and administrative tasks with machine learning, boosting efficiency and cutting costs. Despite the job cuts, Lufthansa predicts stronger profits ahead as fuel savings and more modern planes offset staff reductions. The company emphasized that no pilot or cabin crew jobs will be cut under this plan. CFO Peter Gerber said investing in AI and modern technology is necessary to remain competitive in a more automated airline industry.
Nvidia to invest $100 billion in OpenAI to expand ChatGPT maker’s computing power
Chipmaker Nvidia announced a massive partnership to bolster OpenAI’s computing capacity. Nvidia will invest up to $100 billion as OpenAI deploys at least 10 gigawatts of Nvidia-powered AI data center capacity. This means building out thousands of Nvidia-powered AI server clusters just for OpenAI, the company behind ChatGPT. The investment reflects the enormous hardware demands of large AI models. Analysts say the deal will help OpenAI scale future versions of ChatGPT and other AI tools, keeping pace with rising user demand and allowing rapid development of even more capable AI systems.
Nvidia to invest $5 billion in struggling rival Intel
Nvidia, the world’s largest AI chipmaker, announced it will invest $5 billion in competitor Intel. Nvidia will purchase Intel shares at about $23.28 each, acquiring a stake of roughly 4%. The U.S. government had recently bought a small Intel stake to boost domestic chipmaking; Nvidia’s investment follows this move. Intel, long the leader in PC processors, has struggled to keep up in recent years. Nvidia’s infusion of capital and confidence could help Intel speed up its chip development. Intel CEO Lip-Bu Tan said the partnership will allow the companies to collaborate on future technologies, including AI, benefitting both U.S. competitiveness and their businesses.
California attorney fined $10,000 for filing an appeal with fake legal citations generated by AI
A lawyer in California was ordered to pay a $10,000 fine after submitting a federal court appeal filled with fake legal citations generated by AI. The attorney claimed the errors were inadvertent results of using an AI chatbot to draft the document. The judge reprimanded the lawyer, noting that relying on AI without verification violated ethical and court rules. The ruling serves as a warning to lawyers that AI-generated text, including cases or statutes that don’t exist, can lead to sanctions. The attorney apologized, but the court emphasized that legal filings must be accurate and that blindly trusting AI is not a legal excuse.
Waymo self-driving car can’t be ticketed after illegal U-turn, California cops find
California police officers pulled over a Waymo autonomous car after it made an illegal U-turn, only to discover there was no human driver to ticket. The car, operating without a driver under California’s recent approval of driverless cars on public roads, was allowed to continue on its way. The officers called Waymo and learned that California law requires a driver for a traffic citation to be issued. The incident highlights legal gray areas as driverless cars become more common. State officials say their rules will soon require a remote operator for enforcement purposes, but for now enforcement options remain limited.
OpenAI adds parental controls to ChatGPT for teen safety
OpenAI has introduced new parental controls on ChatGPT to protect teen users. The controls require a joint parent-teen account connection. Once linked, a teen’s account automatically blocks explicit or violent content and disallows asking for harmful advice (e.g., self-harm instructions). Teens can disable the filters themselves but parents cannot override them to allow content. Parents also gain a dashboard with usage limits, the ability to turn off features like memory and voice, and an alert system that flags potentially distressed behavior for review by specialists. OpenAI said this follows concerns about AI harms to minors, although it cautioned that parental involvement is still crucial.
Responding to the climate impact of generative AI
Rapid expansion of large AI data centers for models like ChatGPT is causing a surge in electricity use and greenhouse gas emissions. MIT researchers are now exploring ways to mitigate this climate impact. Proposed solutions include using renewable energy to power AI data centers and developing more efficient AI hardware. Another approach is to improve software, for example by refining algorithms so they need less computation. These efforts aim to ensure that AI’s environmental footprint is minimized even as its capabilities grow.
New AI system learns from many types of scientific information and runs experiments to discover new materials
MIT researchers developed CREST, an AI platform that combines machine learning and lab experiments to find new materials. CREST can ingest diverse data, like images, text descriptions, and experimental results, then propose candidate materials with desired properties. In tests, the system successfully suggested novel superconducting materials faster than traditional methods. By iterating between AI predictions and real-world experiments, CREST dramatically speeds up materials discovery, which could lead to breakthroughs in energy, electronics and more.
New AI tool could accelerate clinical research
Researchers at MIT have created an AI-assisted tool to speed up analysis of medical images. The system can quickly highlight and label areas of interest (like tumors or lesions) in scans, which normally requires time-consuming manual work by experts. Early tests showed the AI tool could annotate MRI and CT scans with high accuracy. By automating these tasks, the technology could help doctors and researchers study treatments and diseases faster, ultimately accelerating clinical trials and patient diagnosis.
MIT affiliates win AI for Math grants to accelerate mathematical discovery
Two MIT researchers, David Roe and Andrew Sutherland, received grants from the Simons Foundation’s “AI for Math” program. Their project will advance automated theorem proving using AI, exploring ways to help computers solve complex math problems. Four MIT alumni also won grants from the program. The initiative aims to speed up mathematical discovery by pairing mathematicians with AI tools that can suggest proofs or counterexamples, potentially revolutionizing how abstract problems are tackled.
New tool makes generative AI models more likely to create breakthrough materials
MIT researchers released SCIGEN, an AI methodology that steers generative models toward inventing useful new materials. Normally, generative models sample candidate materials broadly; SCIGEN guides the AI to focus on specific desired properties (like high conductivity or stability). In experiments, materials proposed with SCIGEN outperformed those from baseline models on the targeted properties. The advance could help design materials for quantum computing, renewable energy or other advanced technologies more efficiently than trial-and-error lab work.
How are MIT entrepreneurs using AI?
The MIT delta v summer startup accelerator showcased dozens of companies leveraging AI. Projects included new AI chips, healthcare diagnostics, AI-driven construction tools and more. One featured company is using AI for personalized nutrition plans, while another automates manufacturing processes. The report highlights that AI is reshaping the startup ecosystem: founders are rapidly iterating products powered by machine learning, attracting investor interest and challenging traditional industries.
What does the future hold for generative AI?
At MIT’s first Generative AI Impact Consortium Symposium, experts in research, industry and ethics gathered to discuss the next wave of generative AI. Topics included how large language models might integrate into creativity, potential new jobs created by AI, and the ethical challenges of misinformation. Panelists emphasized AI’s vast potential in education, science and business, but also warned about bias and safety. The consensus was that collaboration between technologists and policymakers will be crucial to guide AI development responsibly.
How to build AI scaling laws for efficient LLM training and budget maximization
Researchers from MIT and IBM developed a general guide for estimating the performance of large language models (LLMs) based on smaller versions. This “scaling law” framework helps predict how accuracy will improve when increasing model size, data or training compute. The method lets developers plan budgets and resources more precisely: they can see how much bigger a model needs to become to reach a target accuracy. This efficient scaling approach could save time and money in AI development by avoiding trial-and-error experiments.
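As a rough illustration of the idea (the numbers and the simple one-variable power-law form are illustrative assumptions, not taken from the MIT-IBM framework), a scaling law can be fit to a handful of cheap small-scale runs and then extrapolated to a larger model:

```python
import math

# Hypothetical training runs (illustrative numbers): model size in
# parameters vs. final validation loss.
sizes = [1e7, 3e7, 1e8, 3e8, 1e9]
losses = [4.2, 3.6, 3.1, 2.7, 2.3]

# Fit a power law L(N) = a * N^(-alpha) by linear regression in log-log
# space, where it becomes: log L = log a - alpha * log N.
xs = [math.log(n) for n in sizes]
ys = [math.log(l) for l in losses]
k = len(xs)
mx, my = sum(xs) / k, sum(ys) / k
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
alpha, log_a = -slope, my - slope * mx  # alpha > 0: loss falls with scale

def predict(params):
    """Predicted loss for a model with `params` parameters."""
    return math.exp(log_a) * params ** (-alpha)

# Extrapolate from the small runs to a 10x larger (10B-parameter) model.
print(f"alpha = {alpha:.3f}, predicted loss at 10B params = {predict(1e10):.2f}")
```

Real scaling-law work fits many more runs and richer functional forms (jointly in model size, data and compute), but the budgeting workflow is the same: fit on cheap experiments, then size the expensive run against the predicted curve.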
Machine-learning tool gives doctors a more detailed 3D picture of fetal health
MIT’s Computer Science and Artificial Intelligence Lab (CSAIL) created an AI tool that reconstructs a 3D model of a fetus from ultrasound images. Traditional ultrasound provides 2D slices, but this system stitches multiple views into a realistic 3D representation of fetal anatomy. Doctors could then examine the virtual fetus from any angle, aiding in detection of developmental issues. Tested on real patient data, the AI-generated models matched expert annotations closely, promising a noninvasive way to monitor fetal health with more detail.
DOE selects MIT to establish a Center for the Exascale Simulation of Coupled High-Enthalpy Fluid–Solid Interactions
The U.S. Department of Energy’s National Nuclear Security Administration (NNSA) has chosen MIT to lead a new Exascale Center. The Center will focus on simulating extreme environments, for example the high-temperature shocks when spacecraft re-enter Earth’s atmosphere at hypersonic speed. Researchers will use the nation’s largest supercomputers (exascale machines) to accurately model how fluids and solids interact under those conditions. The goal is to improve safety and design of things like satellites and planetary re-entry vehicles, as well as nuclear reactors.
AI and machine learning for engineering design
An MIT mechanical engineering course is applying AI and machine learning theory to real-world engineering problems. Students learn to train neural networks to optimize designs, for example shaping components for strength or efficiency. The course uses hands-on labs and projects, showing how AI tools can aid tasks from aerodynamic design to energy systems. Instructors say teaching AI in engineering curricula helps prepare the next generation of engineers to leverage these powerful new tools.
A greener way to 3D print stronger stuff
MIT CSAIL researchers created SustainaPrint, a method that reduces plastic use in 3D printing without sacrificing strength. The system analyzes a 3D model (like a phone stand or light switch) and adds reinforcement only where the geometry is weakest, leaving the rest hollow. This way, much less filament is used. In tests, SustainaPrint prints were up to 20% stronger than fully solid prints, while saving material. This approach makes eco-friendly 3D printing more practical for everyday use.
A new generative AI approach to predicting chemical reactions
MIT chemists developed a generative AI model that predicts likely products of chemical reactions. Unlike older AI models, this one respects real chemistry constraints (like atom counts and bond types). Given reactants, it produces a distribution of possible outcomes, effectively “imagining” new reaction pathways. In tests on thousands of reactions, the model’s predictions matched known results and even suggested plausible novel reactions. Such AI tools could accelerate drug discovery and materials science by quickly identifying promising reaction options to test in the lab.
3 Questions: The pros and cons of synthetic data in AI
MIT researcher Kalyan Veeramachaneni discusses synthetic data — artificial datasets generated by AI. Synthetic data can train AI models when real data is scarce or sensitive, helping with privacy (no personal info) and reducing collection costs. For example, it’s used in healthcare, finance and autonomous driving. However, synthetic data can introduce biases if not carefully validated, and models trained on it may not generalize well to real-world inputs. Veeramachaneni emphasizes the need to thoroughly evaluate synthetic data quality and mix it with real data when possible to get the best AI performance.
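As a toy sketch of the concept (the data and the simple Gaussian generator are illustrative assumptions, not from the interview), synthetic data generation amounts to fitting a model to real records, sampling fresh ones that contain no real entries, and then validating that key statistics survive:

```python
import random
import statistics

random.seed(0)

# Toy "real" dataset, e.g. anonymized transaction amounts (illustrative).
real = [random.gauss(100, 15) for _ in range(1000)]

# Fit a simple parametric model (here, a single Gaussian) to the real data.
mu = statistics.mean(real)
sigma = statistics.stdev(real)

# Sample a synthetic dataset: same shape, no actual records copied.
synthetic = [random.gauss(mu, sigma) for _ in range(1000)]

# The validation step the interview stresses: check that the synthetic
# data preserves statistics downstream models will rely on.
drift = abs(statistics.mean(synthetic) - mu)
print(f"mean drift between real and synthetic: {drift:.2f}")
```

Production tools model full joint distributions (correlations between columns, categorical fields, time series), but the same generate-then-validate loop applies: a synthetic set is only useful if its measured properties track the real data it stands in for.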
Legal AI startup Legora in talks to raise funding at a $1.8 billion valuation
Legora, a Stockholm-based legal AI company, is negotiating a new funding round that would value it at about $1.8 billion. The startup, which automates legal tasks like contract reviews and research, has attracted major investors. This new round comes just four months after Legora’s Series B, reflecting strong growth. The fresh capital will help expand its technology and international customer base as demand for AI tools in law surges.
Meet the Stanford dropout building an AI to solve math’s hardest problems — and create harder ones
A Stanford dropout has launched Axiom Math, an AI company aiming to tackle very difficult math problems. The startup has attracted top talent (many from Meta’s AI research team) and raised about $64 million. Its AI system attempts to prove mathematical theorems and even invent new hard problems. Axiom’s goal is to push the boundaries of what AI can understand in pure math, potentially creating tools that help mathematicians generate proofs or discover new relationships in number theory and geometry.
No Pixel 10 needed: Google Photos’ conversational editing comes to all Android devices
Google announced that conversational editing in Google Photos, originally exclusive to Pixel 10 phones, will be available on all Android devices. The feature lets users type natural language commands to edit photos (e.g., “brighten the sky” or “remove that person in the photo”). This democratizes a powerful photo editing tool, previously limited by device hardware, and could significantly change how casual users edit images.
Zelenskyy’s UN warning: Regulate AI in weapons before it’s too late
Ukrainian President Volodymyr Zelenskyy told the United Nations that the international community must regulate artificial intelligence in weapons. Speaking amid ongoing tensions, he urged world leaders to establish rules so autonomous weapons systems cannot be deployed without safeguards. The warning highlights concerns that AI-driven warfare could escalate conflicts. Zelenskyy’s appeal adds pressure for agreements on AI arms control, akin to existing accords governing nuclear weapons and drones, to prevent future misuse of AI in combat.
Spotify tightens AI policy and trims its music catalog
Spotify has updated its terms to ban unauthorized AI-generated music and is proactively removing songs flagged as AI-created without artist consent. The streaming giant says this protects artists and fans from fraudulent or copyrighted content. By sharpening its AI guidelines and pruning questionable tracks, Spotify aims to maintain trust in its catalog. The move comes after analysis found a rise in clandestine AI music uploads across the platform. Major publishers applaud the effort as a step toward fair compensation and authenticity in music.
Massive leap for AI neoclouds with deal between Nebius and Microsoft
Nebius, a cloud infrastructure startup, announced a deal with Microsoft worth up to $19.4 billion to build GPU-accelerated “neoclouds” for AI. This partnership will see Microsoft integrate Nebius’s AI-engineered data center solutions into its Azure network, vastly increasing the computational power available for AI workloads. Experts say the move could reshape the AI data center market by making massive GPU farms more energy-efficient and geographically distributed, allowing more companies to run large AI models.
Why ChatGPT’s “Buy It” function reshapes e-commerce
ChatGPT’s new “Buy It” feature, powered by integrations with platforms like Shopify, is set to transform online shopping. Instead of just searching, users can now finalize purchases within the chat interface. This could change how e-commerce works: customers may increasingly turn to AI assistants for personalized shopping, and merchants will adapt strategies for AI-driven sales. Early adopters see this as a game-changer in retail, blending conversational AI with buying and potentially redefining customer engagement.
Firewalls are old-school: AI needs new approaches
Traditional cybersecurity tools like firewalls are proving inadequate in an AI-driven world of rapid attacks and sophisticated social engineering. As AI can automate phishing at scale and even impersonate people convincingly, experts argue organizations need new defenses. These include rigorous inspection of AI-generated content, tighter AI model access controls, and continuous monitoring of AI behavior. Companies are investing in AI-aware security solutions that can detect anomalies in real time. The article suggests a multi-layered approach combining AI-based defenses with updated best practices.
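As a minimal sketch of the continuous-monitoring idea (the function, window and threshold are hypothetical choices, not from the article), one simple building block flags activity that deviates sharply from a rolling statistical baseline:

```python
import statistics

def flag_anomalies(observations, window=20, threshold=3.0):
    """Return indices whose value exceeds `threshold` standard deviations
    above the mean of the preceding `window` observations."""
    flagged = []
    for i in range(window, len(observations)):
        baseline = observations[i - window:i]
        mu = statistics.mean(baseline)
        sigma = statistics.stdev(baseline) or 1e-9  # avoid divide-by-zero
        if (observations[i] - mu) / sigma > threshold:
            flagged.append(i)
    return flagged

# Steady request volume with one sudden burst, roughly how an automated
# phishing or scraping run might look against normal API traffic.
traffic = [100 + (i % 5) for i in range(40)] + [900] + [100] * 10
print(flag_anomalies(traffic))
```

Real AI-aware security products layer far richer signals (content inspection, model access logs, behavioral fingerprints) on top, but the core pattern is the same: learn a baseline of normal behavior and alert on sharp deviations in real time.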
A way forward for AI investors
Investing in AI is becoming clearer with some emerging principles. The article argues that investors should focus on big platform leaders (like Google, OpenAI, Microsoft) which control crucial AI hardware and software. It suggests looking at companies that solve important problems (healthcare, energy) with AI, as these may yield real returns. Industries that use AI as a backbone (like autonomous vehicles) are promising. The piece also highlights “picks-and-shovels” plays, meaning makers of GPUs and other AI chips, as steady bets. In short, broad-market plays in dominant AI platforms and specialized AI tech are the recommended route.