Slipknot co-founder M. Shawn Crahan has publicly endorsed AI as a creative aid, distinguishing between assistive tools and generative replacements. His comments come as the music industry shifts toward licensed AI models following settlements between major labels and tech firms, alongside evolving regulations in Australia and new tagging policies on streaming platforms.
Australia is expanding its online safety regulatory framework to include specific compliance obligations for AI chatbots and companion services. Under the eSafety Commissioner’s new industry codes, providers face phased deadlines through March 2026 to implement safety measures—such as age assurance and risk assessments—designed to prevent minors from accessing restricted or harmful material.
As artificial intelligence data centers drive a surge in electricity demand, US federal agencies and state policymakers are implementing new regulatory frameworks to manage the strain on the power grid. The Federal Energy Regulatory Commission (FERC) has directed PJM Interconnection to establish transparent rules for large, co-located loads to safeguard system reliability. Simultaneously, individual states are enacting distinct laws to limit cost-shifting onto residents and manage local infrastructure impacts, creating a complex dual-governance landscape for energy and technology sectors.
Researchers from NVIDIA and the University of Washington have introduced "long range distillation," a method that utilizes 18,000 years of synthetic climate data to train AI models for improved subseasonal to seasonal weather forecasting, addressing data scarcity in historical records.
On December 22, 2025, OpenAI announced a security update for the ChatGPT Atlas browser agent aimed at reducing the risk of prompt injection attacks. The update follows internal red teaming that identified new exploit classes, leading to the deployment of an adversarially trained model designed to better resist malicious instructions embedded in web content. OpenAI describes this update as part of a continuous "proactive rapid response loop" to address evolving security challenges in agentic browsing.
ABC News reports on the development of MIA, a mental health chatbot from the University of Sydney designed to triage users using clinical frameworks rather than general conversation. Unlike general-purpose AI, MIA relies on a closed knowledge bank to minimize errors and aligns with Australian government referral standards (IAR). The system is currently in testing with a public release expected in 2026.
South Australian startup SYNC has introduced a hard seltzer brand that utilizes an AI-enabled feedback loop to guide flavor development. Founded by Denham D’Silva and Trent Fahey, the company recently piloted an "AI bar" activation where customers received personalized beverages based on digital questionnaires. The initiative, highlighted by the Australian Institute for Machine Learning (AIML), aims to use consumer taste data to determine future product releases.
The Higher Education Research and Development Society of Australasia (HERDSA) has circulated a notice regarding an online workshop on digital ethics and Generative AI hosted by Hong Kong Baptist University. Scheduled for 14 January 2026, the session will address student perspectives on AI usage in mental health support and assessment, referencing frameworks for responsible AI in education.
Snap has introduced Animate It, an AI-powered Lens that allows users to generate animated video clips from text prompts. The feature is available to Lens+ subscribers and represents Snap's first open-prompt video generation tool, following the release of themed AI video lenses in March 2025.
On 22 December 2025, the U.S. Federal Communications Commission (FCC) expanded its "Covered List" to include foreign-made uncrewed aircraft systems (UAS) and their critical components. This regulatory change prevents new drone models from receiving the equipment authorizations required for legal import and sale in the United States, although it does not impact drones already in use or previously authorized models. The decision follows a national security determination citing risks related to unauthorized surveillance and data exfiltration. Notably, the new restrictions encompass "associated software," directly affecting the digital stack that powers modern AI-assisted navigation and perception in autonomous flight systems.
Singapore-based medtech firm Aevice Health has secured regulatory approval from the Therapeutic Goods Administration (TGA) to supply its AeviceMD respiratory monitoring system in Australia. Listed as a "thoracic acoustic recorder," the wearable device utilizes software-driven pattern detection to monitor chest sounds and detect wheezing. By leveraging the Comparable Overseas Regulator pathway via Singapore’s Health Sciences Authority, the system has transitioned into the Australian market to support remote patient monitoring and clinical review for chronic respiratory conditions.
On December 25, 2025, techAU’s Jason Cartwright announced the release of his debut EP, "Vibe Code to This," produced through a human-directed AI workflow. The project utilized a suite of generative tools, including Suno v5 for music composition, Google Gemini Gems for iterative prompting and lyric development, and the Gemini Nano Banana model for cover art generation.
A recent report on AI-human interactions has prompted major platforms to implement stricter guardrails for minor users. Character.AI, Meta, and Snapchat are rolling out age-based restrictions and parental tools, while New York and California are introducing new legal frameworks to regulate AI companion services specifically.
The ACT Government has opted for a "watching brief" regarding the regulation of AI-generated deepfakes in elections, signaling it is not yet ready to update the Territory's electoral laws. While an ongoing Legislative Assembly inquiry explores the risks posed by synthetic media before the 2028 election, officials expressed caution about creating rules that could quickly become obsolete. This position contrasts with South Australia’s recently implemented model, which prohibits deceptive AI content without consent or labeling. Federal authorities also maintain a focus on voter education and existing transparency requirements rather than a blanket ban on generative AI.
AUSTRAC acting CEO Katie Miller has signaled a growing regulatory focus on how financial institutions use generative AI for report drafting. While automation can accelerate compliance workflows, the agency warns that high volumes of generic, AI-authored suspicious matter reports risk obscuring critical intelligence. This move coincides with increased scrutiny on non-financial risk management across the Australian banking sector.
Japanese municipalities are adopting automated AI detection pipelines to manage a surge in bear sightings. The B Alert system uses cloud-based filtering to streamline camera data, enabling quicker notifications via disaster radio and email. This technology, alongside drone and 5G trials, represents a significant shift toward digital wildlife countermeasures.
Australia’s landmark social media minimum age regime, which officially commenced on 10 December 2025, is heading to the High Court for a major constitutional test. The legal challenge, supported by the Digital Freedom Project and Reddit, disputes the validity of the under-16 ban and raises significant questions about the privacy of age assurance technologies. With key hearings scheduled for early 2026, the outcome will determine how the law defines social media platforms and the extent to which the government can regulate digital access for young Australians.
The myGov contact page currently advises users to utilize the Digital Assistant as a primary step for support. As of 23 December 2025, the page explicitly positions the tool as an "any time" option for answering common questions about accounts and linked services. This guidance appears alongside standard helpdesk details, reinforcing a self-service approach for routine inquiries without relying on generative AI.
As generative AI shifts from brainstorming to building, Figma and Manus present diverging paths for product development. This report examines Figma’s integration of the Model Context Protocol (MCP) to bridge designs and code, versus Manus AI’s all-in-one agent workflow powered by Google’s Nano Banana Pro. Beyond features, the analysis addresses critical governance concerns, including Figma’s late-2025 data litigation and the regulatory scrutiny surrounding Manus’s corporate structure.
The latest Adobe Photoshop 2026 update (version 27.2) expands the Generative Fill feature by adding FLUX.2 pro as a partner model option. This change introduces a tiered generative credit system for different AI engines, allowing users to choose between Adobe’s Firefly and external partner models like Google Gemini 3 (Nano Banana Pro) based on their specific creative requirements and credit budget.
As drone gifts surge during the holiday season, Australia’s Civil Aviation Safety Authority (CASA) highlights the importance of operator accountability. While AI-driven features like obstacle avoidance simplify flight, they do not exempt pilots from legal requirements such as maintaining visual line of sight and staying clear of restricted airspace. This report details current safety regulations, enforcement penalties, and the ongoing digital transition toward automated airspace approvals.
New Zealand legal technology company LawVu has acquired Belgian contract automation specialist ClauseBase and launched a new AI-powered analysis tool, LawVu Lens. Announced in December 2025, the acquisition rebrands ClauseBase as "LawVu Draft," bringing intelligent drafting and Microsoft Word integration directly into the LawVu platform. Alongside this, the launch of LawVu Lens provides legal teams with an embedded service for large-scale contract data extraction and repository analysis. Backed by a recent $9 million funding milestone and a reported $350 million valuation, LawVu’s expansion aims to unify fragmented legal workflows into a single, AI-integrated workspace.
The Australian Productivity Commission has released its final report, Harnessing data and digital technology, recommending that the government monitor the impact of AI on copyright for three years rather than introducing immediate legislative exceptions. This approach prioritizes the development of voluntary licensing markets, particularly within the music industry, to ensure creators retain control over their intellectual property. The recommendation aligns with recent government statements favoring collective licensing over a blanket text and data mining exception.
A global research review led by the University of Sydney reveals that artificial intelligence is being integrated into general practice clinics faster than safety evaluations and regulations can be established. While tools such as digital scribes offer administrative relief, the study highlights critical concerns regarding clinical accuracy, patient consent, and the need for standardized oversight in healthcare settings.
Public data on Grok’s usage in Australia appears contradictory. This report analyzes why Similarweb ranks the AI chatbot highly while StatCounter excludes it entirely, clarifying the technical distinction between web traffic rankings and referral header data.
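The measurement gap comes down to how each service counts a visit. A minimal sketch, using hypothetical log entries and an assumed helper name (`classify`), of why a referrer-based tracker like StatCounter can show zero share for a chatbot while a traffic-estimation service like Similarweb still ranks it highly: visits are only attributed to a source if the HTTP Referer header survives, and links opened from chat apps often arrive with that header stripped.

```python
# Sketch: bucketing site visits by the hostname in their Referer header.
# Visits with no (or a stripped) Referer collapse into "direct/unknown",
# so the originating chatbot never appears in referrer-based statistics.
from urllib.parse import urlparse
from collections import Counter

def classify(referer):
    """Return the referring hostname, or a direct/unknown bucket."""
    if not referer:
        return "direct/unknown"   # chat apps frequently strip the Referer
    host = urlparse(referer).netloc.lower()
    return host or "direct/unknown"

# Hypothetical access-log referrers for one website:
visits = [
    "https://www.google.com/search?q=example",
    None,                          # link opened from a chat app
    "https://grok.com/",           # only counted if the header survives
    None,
]
print(Counter(classify(r) for r in visits))
```

Traffic-ranking services estimate total visits to a domain from panel and clickstream data instead, which is why the two methodologies can disagree so sharply about the same product.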
NBN Co and RMIT University have established a new research collaboration under the ASTRID program to develop a digital twin of the nbn network. The project aims to utilize AI and large datasets to model future network scenarios, enhance resilience against extreme weather, and optimize long-term infrastructure planning.
Motorola has launched the Edge 70, a smartphone that distinguishes itself with a two-tiered AI strategy: internal "moto ai" processing for image capture and Google Photos tools for post-editing. This report outlines the device's technical specifications, including the Snapdragon 7 Gen 4 chipset, and clarifies regional variations in battery capacity and pricing between global and Indian markets.
A cyber incident at the University of Sydney has resulted in the unauthorized access and download of historical data files containing personal information for approximately 27,500 individuals, including current and former staff, students, and alumni. While no evidence of data misuse has been detected as of late December 2025, cybersecurity authorities warn that such breaches increase the risk of sophisticated, AI-driven impersonation and social engineering scams. The University has secured the affected environment and is in the process of notifying those impacted.
Beesoft Solutions has launched EV Evolution, an information platform centered around a custom-trained AI chatbot tailored for the Australian electric vehicle landscape. The platform provides users with information on EV availability, charging infrastructure, and regional government incentives. It aims to reduce the complexity of researching electric vehicle ownership by consolidating data into a conversational interface.
As of December 2025, the Australian Government has transitioned to Version 2.0 of its "Policy for the responsible use of AI in government." Managed by the Digital Transformation Agency (DTA), the framework establishes mandatory requirements for non-corporate Commonwealth entities. Key features include the introduction of an AI impact assessment tool, the creation of agency-wide use case registers, and mandatory staff training. While the policy includes national security exemptions for the defence and intelligence sectors, it aims to standardize transparency and risk management across general government operations.