HKU Fertility Study Faces Scrutiny After AI Reference Errors in Paper on 69% Marriage Decline Impact

Image Credit: Abenezer Shewaga | Splash

A recent academic paper from the University of Hong Kong on the city's fertility decline has drawn scrutiny after artificial intelligence tools generated unverifiable references, prompting an apology from its senior author and highlighting the risks of AI-assisted research.

The article, titled "Forty years of fertility transition in Hong Kong", appeared in the Springer-published journal China Population and Development Studies in October 2025. It dissects four decades of demographic data from 1981 to 2021, concluding that a 69 percent drop in marriage rates drove most of the decline in the territory's total fertility rate, which fell to 0.77 births per woman by 2021, with smaller contributions from falls in marital and non-marital fertility.

The AI-Driven Citation Errors

At the heart of the controversy lies the unchecked use of AI for reference compilation. Lead author Yiming Bai, a doctoral candidate in the university's Department of Social Work and Social Administration, used AI software to assemble citations but skipped verification steps. The paper lists 61 references, yet online checks revealed several to be fabrications, including invalid digital object identifiers and entries from phantom journal issues.

One prominent example is a citation to a supposed 2019 work by Mary C. Brinton, with a DOI that resolves to a different publication. Another flagged entry erroneously credits corresponding author Paul Yip as a co-author of a non-existent 2022 study in the Journal of Family Issues. These errors stem from AI "hallucinations", in which models invent plausible but false details to satisfy a prompt.

The core analysis, drawing on official Census and Statistics Department figures and a decomposition model aligned with the Second Demographic Transition theory, remains intact, as confirmed by Yip. Peer reviewers had approved the manuscript after two revision rounds, unaware of the reference flaws.
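The paper's own model is not reproduced here, but the general logic of such decompositions can be illustrated with a standard Kitagawa-style two-factor split, which divides a change in an overall rate into a composition effect (here, shifting marital shares) and a rate effect (fertility within each marital state). The sketch below is a minimal, generic illustration with hypothetical inputs; neither the function nor the numbers come from the HKU study.

```python
# Generic Kitagawa-style decomposition of a change in an overall rate into
# composition (e.g., share married) and rate (e.g., within-state fertility)
# effects. All numbers are hypothetical and illustrative only; they are not
# taken from the HKU paper or from Hong Kong statistics.

def kitagawa(p1, f1, p2, f2):
    """Split the change in sum(p*f) between two periods into a composition
    effect (changes in population shares p) and a rate effect (changes in f)."""
    composition = sum((p2[k] - p1[k]) * (f1[k] + f2[k]) / 2 for k in p1)
    rate = sum((f2[k] - f1[k]) * (p1[k] + p2[k]) / 2 for k in p1)
    return composition, rate

# Hypothetical example with two marital states.
p_start = {"married": 0.70, "unmarried": 0.30}   # population shares, period 1
f_start = {"married": 2.20, "unmarried": 0.10}   # fertility by state, period 1
p_end   = {"married": 0.40, "unmarried": 0.60}   # population shares, period 2
f_end   = {"married": 1.80, "unmarried": 0.12}   # fertility by state, period 2

comp, rate = kitagawa(p_start, f_start, p_end, f_end)
print(f"composition (marriage) effect: {comp:+.3f}")
print(f"rate (within-state fertility) effect: {rate:+.3f}")
print(f"total change: {comp + rate:+.3f}")
```

In this made-up example the composition term dominates, which echoes only the qualitative shape of the paper's conclusion that declining marriage, rather than fertility within marriage, accounts for most of the change.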

Discovery Through Online Vigilance

The flaws were highlighted online and picked up by local media in late October 2025. Users cross-referenced DOIs and journal archives, posting screenshots of dead links and mismatched metadata. No formal complaints appear on record, though the online chatter amplified concerns over research integrity in a field grappling with Hong Kong's real-world fertility crisis, exacerbated by high housing costs and work-life imbalances.
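Spot-checks of this kind can be partly automated. The sketch below assumes Python with the requests library and the public Crossref REST API; it simply asks whether a DOI is registered and what record it points to, as a generic illustration of the approach rather than the exact checks readers performed.

```python
# Minimal sketch of a DOI spot-check against the public Crossref REST API
# (https://api.crossref.org). Pass one or more DOIs on the command line.
import sys
import requests

def check_doi(doi: str) -> None:
    """Query Crossref for a DOI and report whether it resolves, and to what."""
    resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
    if resp.status_code == 404:
        print(f"{doi}: not registered with Crossref (possible fabrication)")
        return
    resp.raise_for_status()
    work = resp.json()["message"]
    title = (work.get("title") or ["<no title>"])[0]
    authors = ", ".join(a.get("family", "?") for a in work.get("author", []))
    # A DOI can resolve yet point to a different work than the one cited,
    # so the returned title and authors still need a manual comparison.
    print(f"{doi}: resolves to '{title}' by {authors}")

if __name__ == "__main__":
    for doi in sys.argv[1:]:
        check_doi(doi)
```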

This episode echoes broader patterns in the adoption of AI for scholarly tasks. Since tools like ChatGPT gained traction in 2023, studies reported by outlets such as Nature have documented rising instances of fabricated citations in the social sciences. For instance, two thirds of the citations checked in a Springer Nature book on machine learning published in 2025 turned out to be fabricated. Separately, research highlights the burdens on non-native English speakers in academia, who may turn to AI for efficiency in literature reviews amid competitive pressures. In Hong Kong, where grant pressures mount amid a shrinking population, such shortcuts risk undermining trust.

Responses and Accountability Measures

Paul Yip, a professor in the same department and associate dean of the Faculty of Social Sciences, addressed the matter on November 9, 2025, in statements to local media including Dim Sum Daily. He described Bai's reliance on AI as a first-time experiment gone awry, accepted full responsibility for oversight as corresponding author, and expressed regret for any damage to the university's standing or the journal's credibility.

Yip has since manually audited every citation for a corrected version, set for upload within days, with the publisher to issue an erratum notice. Bai, described by Yip as an otherwise high-performing researcher, faces mandatory AI-literacy training before further publications. The University of Hong Kong has not released an official statement, but Yip's actions align with its guidelines, which mandate disclosure of AI use in submissions.

Co-authors, including statisticians from HKU and mainland China's Jilin University, contributed data inputs rather than the bibliography, insulating them from direct fault.

Broader Context in AI's Academic Integration

This incident unfolds against a backdrop of rapid AI permeation in higher education. Hong Kong universities, including HKU, rolled out guidelines in 2025 urging cautious use of AI for drafting and analysis. Globally, incidents like the 2025 Springer Nature book with substantial fake references underscore verification gaps, particularly in interdisciplinary fields blending demographics and policy.

The cause boils down to the temptation of efficiency: AI accelerates sourcing across voluminous databases, but without human checks it propagates errors. Here, Bai's unvetted output bypassed standard protocols, possibly because of thesis deadlines in a post-pandemic rush.

Implications and Forward Paths

Reputational ripples could linger for HKU, ranked among Asia's top institutions, as funders scrutinise AI protocols. For the journal, the correction bolsters transparency but may invite audits of similar submissions. On a societal level, the paper's valid insights into marriage barriers, such as unaffordable housing averaging 14.4 times median income in 2024, retain value for policymakers weighing incentives like expanded childcare subsidies.

Looking ahead, experts anticipate stricter norms. Some observers advocate stronger screening and verification in submission workflows, while universities such as the University of Melbourne already provide GenAI guidance and training and use AI detection in coursework contexts. In Hong Kong, where fertility policies evolve amid Beijing's influence, such lapses risk eroding public faith in evidence-based reforms. McKinsey analyses suggest generative AI could enable labour productivity growth of 0.1 to 0.6 percent annually through 2040, demanding balanced innovation without compromising rigour.

This case serves as a timely reminder: AI augments, yet human diligence defines scholarly worth.
