
AI CAN READ OUR HORSES’ EMOTIONS

And it’s more accurate than you might think

For centuries, horse owners have relied on feel, intuition, and hard-won experience to interpret their horses’ emotional states. Now, artificial intelligence is learning to do the same thing, and at a speed and scale no human can match.

A series of landmark studies published in 2024 and 2025 has demonstrated that deep learning models can decode horses’ emotional states from facial expressions and video footage with meaningful accuracy. The implications for welfare monitoring, competition veterinary assessment, and everyday horse management are significant, and the science behind it is genuinely fascinating (if a little terrifying!).

The research

A study published in PLoS ONE, subsequently corrected and updated in 2025, represents a milestone in the emerging field of animal affective computing. Researchers from the University of Haifa in Israel, working in collaboration with the University of Lincoln in the UK, developed and tested AI models capable of distinguishing between four distinct emotional states in horses: baseline calm, positive anticipation, disappointment, and frustration.

Using video footage collected during controlled experiments, the team trained a deep learning pipeline to analyse horses’ facial expressions – the same subtle muscular movements mapped by the Equine Facial Action Coding System (EquiFACS) that researchers use to objectively document equine emotion. The results were striking: the model achieved 76% accuracy in separating the four emotional states – a remarkable figure for a task that challenges even experienced equestrians.

Some distinctions proved harder than others. Anticipation and frustration – two emotionally adjacent states that can look superficially similar in facial expression – were correctly separated only 61% of the time. This is, the researchers noted, consistent with what we see in human attempts to distinguish these states in horses, suggesting the AI is hitting a genuine ceiling that reflects the complexity of equine emotional expression rather than a limitation of the technology alone.

What the AI is looking at

What makes this research particularly interesting is how closely the AI’s approach mirrors the methodology that equine behaviour scientists have spent decades developing. EquiFACS, the facial coding system underpinning much of this work, catalogues the specific muscle movements – called action units – that correspond to different emotional and physiological states in horses.

Key areas the system analyses include the position and movement of the ears, the tension and shape of the muscles around the eyes and brow, the degree of white showing in the eye (scleral exposure), the tightness of the nostrils, and the position of the muzzle. Each of these cues carries information about a horse’s internal state. Combined and analysed at speed, they build a picture of emotional valence and arousal that trained human observers can take years to develop sensitivity to.

Separately, a 2025 study in Scientific Reports demonstrated that deep learning models could assess the welfare of ridden horses from video alone – a capability with direct implications for competition environments where the assessment of horse wellbeing is currently dependent on human judges who may miss subtle signals under time pressure.

A new chapter in welfare monitoring

The most compelling application of this technology is in real-time welfare monitoring. Researchers have already demonstrated that AI-powered accelerometers can detect colic in horses on average 20 minutes before the pain peak and four minutes before clinical onset, potentially transforming emergency response in yards where horses may be unsupervised overnight.

Extending this capability to emotional states opens the door to automated welfare surveillance systems that can flag distress, frustration, or chronic negative affect in horses regardless of whether a skilled observer happens to be present. For horses in competition, rehabilitation, or equine-assisted therapy – settings where welfare monitoring is critical – the potential is considerable.

Stable-based cameras already monitor for stereotypic behaviours like cribbing and weaving. The next generation of these systems is beginning to incorporate AI-based emotion recognition, capable of detecting not just gross behavioural abnormalities but subtler, earlier indicators of compromised wellbeing.

What this doesn’t replace (at least in our opinion!)

It is important to be clear about what AI emotional recognition tools are, and what they are not. They are not a replacement for experienced, attentive horsemanship. They do not interpret context, history, or individual variation in the way that a skilled rider or behaviourist can. Furthermore, the science is still developing in this area: 76% accuracy, while impressive as a research result, is not yet the kind of reliability needed for high-stakes welfare judgements in clinical or competitive settings.

What these tools represent is something genuinely valuable nonetheless: an objective, reproducible, fatigue-proof layer of observation that can work alongside human expertise rather than replacing it. In a field where welfare assessment has historically relied almost entirely on subjective human judgement, that additional layer of objectivity is precisely what the science has been calling for.

The day when your yard’s camera system sends you an alert because your horse’s facial expressions suggest frustration or discomfort during a training session may not be as far away as it sounds.


Sources: “Automated recognition of emotional states of horses from facial expressions,” PLoS ONE, 2024/2025, University of Haifa and University of Lincoln; “Using deep learning models to decode emotional states in horses,” Scientific Reports, 2025.
