Ongoing Projects

Last updated: November 11, 2025

AIS Support Fund for Interdisciplinary Research Collaboration, "A Scalable Framework for Responsible AIGC Governance and Innovation", 2025-2026.

Project Overview: This project tackles the risks of AI-generated content (AIGC) by developing a multi-level governance framework. It will create a taxonomy that categorizes AIGC risks such as deepfakes and algorithmic bias, drawing on empirical analysis and expert interviews to identify high-risk scenarios.

The project will explore AIGC's impact on public trust and media credibility through case studies. It will co-design technical solutions, such as explainability tools and content detection systems, and integrate them into media workflows. Governance efforts will include creating the Hong Kong AIGC Risk Database, evaluating standards through adversarial testing, and developing policy toolkits informed by global governance models. The project aims to deliver a scalable framework for responsible AIGC innovation.
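
Purely as an illustration of what one entry in such a taxonomy and risk database might look like, the following Python sketch defines a hypothetical record schema; every field name, category, and value here is an assumption for exposition, not the project's actual design.

```python
from dataclasses import dataclass, field
from enum import Enum

class RiskCategory(Enum):
    # Hypothetical top-level categories; the project's actual taxonomy may differ.
    DEEPFAKE = "deepfake"
    ALGORITHMIC_BIAS = "algorithmic_bias"
    MISINFORMATION = "misinformation"

@dataclass
class AIGCRiskRecord:
    """One hypothetical entry in an AIGC risk database."""
    risk_id: str
    category: RiskCategory
    scenario: str                  # e.g., "synthetic audio impersonating an official"
    severity: int                  # 1 (low) to 5 (high), assigned via expert review
    affected_domains: list[str] = field(default_factory=list)
    detection_methods: list[str] = field(default_factory=list)

# A toy record, invented for illustration only:
record = AIGCRiskRecord(
    risk_id="HK-AIGC-0001",
    category=RiskCategory.DEEPFAKE,
    scenario="Synthetic video of a journalist reporting fabricated events",
    severity=5,
    affected_domains=["broadcast news"],
    detection_methods=["frame-level artifact detection"],
)
```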

Principal Investigator: Celine Yunya SONG (HKUST)

Collaborators: Janet HSIAO (HKUST), Masaru YARIME (HKUST), Yangqiu SONG (HKUST)

SBM Strategic Fund for Futuristic Business Research, "Generative Artificial Intelligence as a Cognitive Partner for Human Responders", 2025-2027.

Project Overview: This project pioneers a new paradigm for human–AI interaction by reframing Generative AI (GenAI) as a cognitive partner rather than a content generator. Instead of producing text on behalf of users, GenAI engages them in AI-mediated meta-communication—a reflective process that prompts clarification, elaboration, and refinement of thought. Through this interaction, individuals learn to express their ideas with greater clarity, specificity, and persuasiveness.

The study moves beyond traditional single-domain experiments by adopting a multi-domain, multi-level experimental framework. Each experiment situates GenAI in a distinct communicative context—such as personal feedback, career advising, economic forecasting, journalism, and gendered self-promotion—to examine how AI scaffolds human reasoning across diverse cognitive demands, levels of expertise, and communication asymmetries. This comparative design enables a deeper understanding of how AI can support human cognition in varied real-world situations.

Methodologically, the project introduces several innovations. A proof-of-concept pipeline integrates real-time AI prompts into open-ended responses, followed by behavioral and perceptual evaluations by external raters. Hybrid evaluation metrics combine automated linguistic measures (e.g., clarity, concreteness) with behavioral outcomes (e.g., forecast accuracy, writing quality) and user-experience data (e.g., effort, satisfaction). Cross-domain experimentation operationalizes the “AI-as-partner” model across contexts, while an equity-testing framework examines how AI-mediated reflection shapes self-promotion behaviors across gender, yielding both theoretical and policy-relevant insights.
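
As a minimal sketch of how such a hybrid metric might be assembled, the Python snippet below standardizes three heterogeneous metric families and combines them with illustrative weights; the scoring inputs and the weighting scheme are assumptions, not the project's actual measures or calibration.

```python
import statistics

def zscore(values):
    """Standardize raw scores so metrics on different scales can be combined."""
    mean, sd = statistics.mean(values), statistics.stdev(values)
    return [(v - mean) / sd for v in values]

def hybrid_score(linguistic, behavioral, experience, weights=(0.4, 0.4, 0.2)):
    """Weighted composite of three standardized metric families (per participant).

    linguistic: e.g., rater-coded clarity or concreteness
    behavioral: e.g., forecast accuracy or writing quality
    experience: e.g., self-reported effort or satisfaction
    The weights are illustrative placeholders.
    """
    w_l, w_b, w_e = weights
    return [w_l * l + w_b * b + w_e * e
            for l, b, e in zip(zscore(linguistic), zscore(behavioral),
                               zscore(experience))]

# Toy data for three participants (fabricated for illustration):
print(hybrid_score(linguistic=[3.1, 4.0, 2.5],
                   behavioral=[0.62, 0.71, 0.55],
                   experience=[4.2, 3.8, 4.5]))
```

The design point the sketch captures is simply that heterogeneous measures must be put on a common scale before being combined; in practice, weights and metric families would be calibrated separately for each experimental domain.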

By conceptualizing AI as a meta-communicator—an active co-thinker that enhances reasoning and reflection—this project redefines the boundaries of communication science and human–AI collaboration. Its open-source tools and evaluation rubrics will inform the design of AI-enhanced education, journalism, and policymaking interfaces, advancing a more reflective, equitable, and human-centered AI future.

Investigators: David HAGMANN (PI, HKUST), Yang LU (Co-PI, HKUST), Celine Yunya SONG (Co-PI, HKUST), and George LOEWENSTEIN (Co-PI, Carnegie Mellon University)

RGC - Early Career Scheme, "Expressive and Controllable AI Music Creation based on Audio-Oriented Representation Analysis on Large-scale Data", 2023-2025.

Project Overview: This project addresses the limitations of current AI music creation (AMC) by developing a pioneering audio-oriented, end-to-end framework that unifies composition, performance control, and audio synthesis. Instead of relying on symbolic intermediaries that discard timbre, emotion, and multi-track effects, it treats finished audio as the sole workspace, enabling large-scale disentangled representation learning to capture expressive, harmonic, and timbral qualities in a single model. A comprehensive, open-source dataset of high-resolution audio recordings will be curated and released to fuel algorithm development.

Guided by human-like perception, the project will jointly optimize all musical factors through these learned representations, allowing users to flexibly control style, emotion, instrumentation, and harmony in multi-track generation. The same representations will support precise “re-creation” of existing pieces, enabling note-level editing, timbre swapping, or emotional re-targeting without re-recording. The deliverables (dataset, open models, and editing toolkit) will advance academic research on artificial creativity while equipping the media, entertainment, and metaverse industries with controllable, culturally aware music generation capabilities.
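
To make "disentangled representation learning" concrete, here is a minimal PyTorch-style sketch in which an encoder maps audio features to separate latent factors that can be recombined for controllable generation; the factor names, dimensions, and architecture are illustrative assumptions, not the project's actual model.

```python
import torch
import torch.nn as nn

class DisentangledAudioEncoder(nn.Module):
    """Maps an audio feature sequence to separate latent factors (illustrative)."""
    def __init__(self, n_mels=80, hidden=256, z_dim=64):
        super().__init__()
        self.backbone = nn.GRU(n_mels, hidden, batch_first=True)
        # One head per musical factor to be controlled independently.
        self.heads = nn.ModuleDict({
            name: nn.Linear(hidden, z_dim)
            for name in ("style", "emotion", "timbre", "harmony")
        })

    def forward(self, mel):              # mel: (batch, time, n_mels)
        h, _ = self.backbone(mel)
        pooled = h.mean(dim=1)           # summarize the sequence
        return {name: head(pooled) for name, head in self.heads.items()}

enc = DisentangledAudioEncoder()
factors = enc(torch.randn(2, 400, 80))   # two toy mel-spectrogram clips
# Swapping one factor between clips (here, timbre) while holding the others
# fixed illustrates the kind of controllable "re-creation" the project targets.
mixed = {k: (factors[k] if k != "timbre" else factors["timbre"].flip(0))
         for k in factors}
```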

Principal Investigator: Wei XUE (HKUST)

Collaborators: Qiuqiang KONG (CUHK), Xu TAN (Formerly Microsoft)

National Natural Science Foundation of China, "Research on Multichannel Speech Enhancement based on Acoustic Environment Representation Learning", 2023-2025.

Project Overview: This project confronts the fragility of speech processing in noisy, real-world settings by introducing a unified “acoustic-environment representation” that fuses spatial cues, temporal-spectral texture, and noise statistics into one compact, learnable vector. Instead of cascading separate estimators for direction-of-arrival, coherence, and interference power (each demanding oracle knowledge of the others), the framework jointly optimizes every acoustic factor end-to-end, enabling spatial filters to be generated directly from the representation with minimal prior assumptions. A continuously updated memory module will let the system self-adapt on the fly to newly captured data, closing the loop between environmental sensing and enhancement performance.
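
A minimal PyTorch sketch of the core idea, under the assumption of a simple encoder and filter generator: a multichannel spectrogram is pooled into a single environment vector, from which per-frequency spatial filter weights are generated. All shapes, layers, and pooling choices are illustrative, not the project's architecture.

```python
import torch
import torch.nn as nn

class EnvRepresentationEnhancer(nn.Module):
    """Illustrative: encode multichannel spectra into one environment vector,
    then generate per-frequency spatial filter weights from that vector."""
    def __init__(self, n_mics=4, n_freq=257, env_dim=128):
        super().__init__()
        # Input per (frame, freq) bin: real + imaginary parts for each mic.
        self.encoder = nn.Sequential(
            nn.Linear(2 * n_mics, 64), nn.ReLU(),
            nn.Linear(64, env_dim),
        )
        # Generate complex filter weights (real + imag per mic) for every bin.
        self.filter_gen = nn.Linear(env_dim, n_freq * 2 * n_mics)
        self.n_mics, self.n_freq = n_mics, n_freq

    def forward(self, spec):  # spec: (batch, frames, freq, mics), complex
        x = torch.cat([spec.real, spec.imag], dim=-1)        # (b, t, f, 2m)
        env = self.encoder(x).mean(dim=(1, 2))               # one vector per clip
        w = self.filter_gen(env).view(-1, self.n_freq, 2 * self.n_mics)
        w = torch.complex(w[..., :self.n_mics], w[..., self.n_mics:])
        # Apply the spatial filter: weighted sum across microphones per bin.
        return torch.einsum("btfm,bfm->btf", spec, w.conj())

model = EnvRepresentationEnhancer()
noisy = torch.randn(1, 100, 257, 4, dtype=torch.cfloat)     # toy multichannel STFT
enhanced = model(noisy)                                      # (1, 100, 257), one channel
```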

Principal Investigator: Wei XUE (HKUST)

Collaborators: Qiuqiang KONG (CUHK)

RGC - Theme-based Research Scheme, "Building Platform Technologies for Symbiotic Creativity in Hong Kong", 2021-2026.

Project Overview: In a rapidly evolving, technology-driven world, the fusion of arts and technology has given rise to Art Tech, fundamentally transforming how arts and culture are created, received, and experienced. Positioned at the intersection of arts and science, this research project leverages cutting-edge technology to revolutionize human-AI interaction. Focusing on the sustainability of Hong Kong’s arts ecosystem, it explores AI-driven Art Tech opportunities to invigorate the city’s cultural scene, fostering innovative modalities of artistic production and consumption with substantial socio-economic ripple effects across the business, healthcare, and education sectors.

Drawing on interdisciplinary expertise in AI, cognitive science, and the arts, the project pursues three core tasks: developing algorithmic systems for artefact creation informed by cognitive, physiological, and behavioral data; pioneering immersive XR platforms for artistic delivery and audience engagement in education; and deploying applications that enhance human creativity.

Anticipated outcomes include dedicated application projects for global technology testing, a groundbreaking Research Theatre for Art Tech innovations, and a comprehensive Digital Art and Policy Network connecting Hong Kong to international developments. Harnessing advances in AI, the initiative aims to reshape the art world and creative industries, delivering social, educational, and economic benefits in Hong Kong and the Greater Bay Area, while providing an interdisciplinary framework for tackling post-COVID societal challenges and driving inclusive socio-cultural-economic progress.

Project Coordinator: Yike GUO (HKUST)

Co-Project Coordinator: Johnny POON (HKBU)

Co-Investigators: Wei XUE (HKUST), Jie CHEN (HKBU)