AI-Powered Animal Behaviour Research Transforms Neuroscience

Image Credit: Gwen Weustink | Unsplash

The intersection of artificial intelligence and neuroscience is reshaping how scientists study animal behaviour, offering unprecedented precision and efficiency in laboratory research. Once reliant on rudimentary tools and human observation, researchers are now harnessing advanced machine learning algorithms and continuous monitoring systems to unlock deeper insights into the brain, with implications for understanding conditions such as addiction, depression, and Alzheimer’s disease. This technological leap, spotlighted by the efforts of scientists like Vivek Kumar and Talmo Pereira, marks a significant evolution in computational ethology, a field that blends AI with biology to transform data collection and analysis.

From Simple Dots to Sophisticated Systems

In the late 2000s, neurogeneticist Vivek Kumar, then a postdoctoral researcher at the University of Texas Southwestern Medical Center, began his journey into behavioural research by screening thousands of mice for subtle shifts in activity and responses to psychostimulants. At the time, the technology was basic: a computer program reduced a mouse’s movements to a single dot on a screen, tracked for mere minutes in a testing chamber. Kumar quickly identified the shortcomings of this method. “The dimensionality and the complexity of what the animal does versus what we’re seeing on the computer, what we’re capturing and we’re studying, is just like night and day”, he remarked, highlighting how nuanced behaviours like grooming—comprising paw licking, face washing, and body licking—were lost in translation.

Determined to bridge this gap, Kumar, now at the Jackson Laboratory, shifted his focus to developing AI-driven tools for more precise behavioural quantification. His work has led to the creation of advanced home-cage systems, such as the DAX system, a high-tech unit featuring integrated cameras, lighting, and cloud connectivity via the Digital In Vivo System (DIV Sys). Using neural networks, DAX processes video footage to outline individual mice, track key body points, and distinguish between them, generating over 60 variables like posture, body mass, and specific actions such as grooming or freezing. This wealth of data, processed through machine learning, provides a detailed behavioural profile far beyond what earlier methods could achieve.
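
To give a concrete sense of what behavioural variables derived from tracked body points can look like, here is a minimal, hypothetical Python sketch. It is not the DAX/DIV Sys pipeline; the array layout, body-part indices, frame rate, and thresholds are all illustrative assumptions.

```python
import numpy as np

# Hypothetical example: deriving simple behavioural variables from pose keypoints.
# `keypoints` is assumed to have shape (n_frames, n_bodyparts, 2), in pixels;
# this is NOT the actual DAX/DIV Sys pipeline.

FPS = 30                                  # assumed camera frame rate
NOSE, SPINE_CENTER, TAIL_BASE = 0, 1, 2   # assumed body-part indices

def behavioural_features(keypoints: np.ndarray) -> dict:
    """Compute a few illustrative per-frame variables from tracked body points."""
    # Body length: nose-to-tail-base distance in each frame.
    body_length = np.linalg.norm(keypoints[:, NOSE] - keypoints[:, TAIL_BASE], axis=1)

    # Locomotion speed: frame-to-frame displacement of the spine centre.
    step = np.diff(keypoints[:, SPINE_CENTER], axis=0)
    speed = np.linalg.norm(step, axis=1) * FPS    # pixels per second

    # Naive "freezing" flag: near-zero movement (the threshold is arbitrary).
    freezing = speed < 2.0

    return {"body_length": body_length, "speed": speed, "freezing": freezing}
```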

Computational Ethology Takes Flight

Kumar’s efforts coincide with a broader resurgence of interest in naturalistic animal behaviour, propelled by advancements in computational science. This movement has given rise to computational ethology, a discipline that leverages AI tools like machine learning and computer vision to analyze behaviour with a level of detail once unimaginable. Talmo Pereira, a computational neuroscientist at the Salk Institute, explains the shift:

“In classic ethology, what comes to mind is something like a very dedicated graduate student sitting in a bush with a notepad… Now, with the advent of these computational tools, we’ve been able to have the same degree of fidelity in the way that we quantify complex behaviours”.

Pereira’s contribution includes the Social LEAP Estimates Animal Poses (SLEAP) algorithm, developed during his PhD at Princeton Neuroscience Institute. This open-source tool uses deep learning to label and track body parts in video footage, adaptable to species ranging from fruit flies to whale sharks. Researchers worldwide have applied SLEAP to diverse studies, such as analyzing social isolation in bumblebees or collective temperature responses in mice, demonstrating its versatility and accessibility. By automating posture tracking, SLEAP eliminates the subjectivity and labour of manual observation, opening new experimental possibilities.
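
For readers curious what running such a tool looks like in practice, the snippet below is a rough sketch following SLEAP’s documented high-level Python workflow: load trained models, run inference on a video, and save the predictions. The file paths are placeholders, and exact function names and arguments should be checked against the current SLEAP documentation at sleap.ai.

```python
import sleap

# Load a video and trained SLEAP models, run pose inference, and save the results.
# Paths are placeholders; see the SLEAP docs (sleap.ai) for training and model details.
video = sleap.load_video("homecage_session.mp4")
predictor = sleap.load_model([
    "models/centroid_model",            # stage 1: locate each animal
    "models/centered_instance_model",   # stage 2: estimate body-part positions
])
predictions = predictor.predict(video)
predictions.save("homecage_session.predictions.slp")
```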

Smart Cages and Translational Science

At University College London, neuroscientist Julija Krupic has taken a similar AI-driven approach with her smart-Kage system, designed to study brain regions such as the hippocampus and entorhinal cortex, which are critical for memory and spatial navigation and are among the earliest regions affected in Alzheimer’s disease. Facing the demands of translational research, where larger sample sizes necessitate automation, Krupic’s team developed a fully automated home-cage system. Equipped with a top-mounted camera and deep learning algorithms, smart-Kage conducts cognitive tests like the T-maze and novel object recognition without human intervention, capturing continuous behavioural data.
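
As an illustration of how a cognitive test can be scored automatically from tracked positions, here is a hypothetical Python sketch for novel object recognition: it tallies time spent with the nose near each object and computes the standard discrimination index. It is not the smart-Kage implementation, and the zone radius, frame rate, and object coordinates are assumed values.

```python
import numpy as np

# Hypothetical sketch of automated novel-object-recognition (NOR) scoring from
# tracked nose coordinates; not the smart-Kage implementation.

FPS = 25            # assumed frame rate
ZONE_RADIUS = 40.0  # pixels; "exploring" = nose within this distance of an object

def exploration_time(nose_xy: np.ndarray, object_xy) -> float:
    """Seconds spent with the nose inside the zone around one object."""
    dist = np.linalg.norm(nose_xy - np.asarray(object_xy), axis=1)
    return float(np.sum(dist < ZONE_RADIUS)) / FPS

def discrimination_index(nose_xy: np.ndarray, novel_xy, familiar_xy) -> float:
    """Standard NOR index: (novel - familiar) / (novel + familiar) exploration time."""
    t_novel = exploration_time(nose_xy, novel_xy)
    t_familiar = exploration_time(nose_xy, familiar_xy)
    return (t_novel - t_familiar) / (t_novel + t_familiar)
```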

The development of smart-Kage, now commercialized through Cambridge Phenotyping, was aided by affordable hardware like Arduino and Raspberry Pi, reflecting how accessible technology is democratizing innovation. Research teams globally are already using the system to explore conditions ranging from circadian rhythm disorders to Down syndrome, underscoring its potential to accelerate discovery.

Benefits Beyond Data: Welfare and Reproducibility

AI-powered home-cage monitoring offers more than just enhanced data collection—it’s improving animal welfare and research reliability. Traditional methods often involve handling and cage transfers that stress lab animals, potentially skewing results. Continuous surveillance in a naturalistic setting minimizes these disruptions, providing a truer reflection of behaviour. Studies suggest this approach boosts reproducibility by reducing human bias and enabling consistent, round-the-clock observation—crucial for nocturnal species like mice, often tested out of sync with their natural cycles.

Animal welfare researcher Sara Fuochi notes another advantage: continuous sensor monitoring can flag health issues, such as reduced activity, alerting scientists to intervene promptly. Additionally, longitudinal studies enabled by these systems could reduce the number of animals needed, as repeated observations on the same subjects replace larger cohorts.
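
A simple, hypothetical sketch of such a welfare alert is shown below: it flags an animal whose daily activity drops well below its own recent baseline. The window length and drop threshold are illustrative choices, not values from any published monitoring system.

```python
import numpy as np

# Hypothetical welfare alert: flag a day when an animal's activity falls far
# below its own rolling baseline. The 7-day window and 50% threshold are
# illustrative assumptions.

def low_activity_alert(daily_distance: np.ndarray, window: int = 7, drop: float = 0.5) -> bool:
    """Return True if the latest day's distance is below `drop` x the rolling baseline."""
    if len(daily_distance) <= window:
        return False  # not enough history to establish a baseline
    baseline = np.median(daily_distance[-window - 1:-1])
    return bool(daily_distance[-1] < drop * baseline)
```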

Challenges and Future Horizons

Despite these advancements, challenges persist. High-tech systems like Envision (an updated DAX/DIV Sys commercialized with Allentown) are engineering marvels but come with steep costs, limiting widespread adoption. Open-source options like SLEAP are more affordable but require technical expertise, posing barriers for some labs. Pereira highlights a technical hurdle: accurately tracking multiple animals over extended periods. Even minor errors in identity tracking—mixing up experimental and control subjects, for instance—can compromise results when recording 24/7.
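
To make the identity-tracking problem concrete, here is an illustrative Python sketch of one common building block: matching detections in a new frame to the animals from the previous frame by minimising centroid distances with the Hungarian algorithm. Real trackers layer appearance features and temporal models on top of this, and this is not the exact approach used by Envision or SLEAP.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Illustrative frame-to-frame identity matching via the Hungarian algorithm on
# centroid distances. Assumes the same number of detections in both frames.

def match_identities(prev_centroids: np.ndarray, new_centroids: np.ndarray) -> np.ndarray:
    """For each previously tracked animal, return the index of its matched new detection."""
    # Pairwise Euclidean distances between previous and new centroids.
    cost = np.linalg.norm(prev_centroids[:, None, :] - new_centroids[None, :, :], axis=2)
    prev_idx, new_idx = linear_sum_assignment(cost)
    matches = np.empty(len(prev_centroids), dtype=int)
    matches[prev_idx] = new_idx
    return matches
```

Even with an optimal per-frame assignment like this, small errors accumulate over long recordings, which is why Pereira flags sustained multi-animal identity tracking as an open technical hurdle.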

Data sharing could amplify these technologies’ impact, with Kumar envisioning platforms where researchers exchange algorithms and footage. Fuochi agrees, noting that repurposing data could further reduce animal use, though she stresses the need for clear policies on ownership and intellectual property. Krupic, meanwhile, advocates for standardization to enhance reproducibility but questions whether a few unified systems or diverse, transparent approaches are the best path forward—a debate she says has yet to fully emerge.

Source: The Scientist
