Through the Looking-Glass: Three Ways Advancements in Artificial Intelligence Will Change Learning & Development

Artificial intelligence could soon transform learning by augmenting human empathy and judgement with rich, context-sensitive information.


Background: AI Is Changing Every Aspect of Public Service

Singapore’s Digital Government Blueprint,1 recently revised in 2020, has set out a strategic plan to use data, connectivity, and computing to improve how every agency operates, delivers services, and engages stakeholders. Artificial Intelligence (AI) plays a big role in this plan, holding the promise of making public services seamless and integrated for our citizens.

AI also has the potential to transform Learning and Development (L&D), the field for which the Civil Service College (CSC) is the lead public sector agency, through the new affordances opened up by advances in Machine Learning and Deep Learning. Singapore’s general- and higher-education sectors have outlined plans to personalise learning as part of the National AI Strategy, launched in 2019.2 While the education sector is on track to meet its goals, L&D functions in government—and adult learning in general—could benefit from AI-related concepts such as adaptive learning. With the COVID-19 pandemic accelerating the pivot to digital means for work and learning, it is timely to reimagine how learning could work in the Public Service.

We believe AI-driven shifts in learning can happen in three ways, following three commonly known conceptual models of AI in education: from Creating to Curating in the Domain model, from Seeing to Knowing in the Learner model, and from Prescribing to Recommending in the Pedagogical model.

While there is great potential in using AI to codify well-defined knowledge, it is best used for less complex skills.

From Creating to Curating—AI and the Domain Model

AI can be used to describe particular fields of knowledge in the form of a domain model, which typically consists of a web of ‘knowledge points’ related to each other in some way. Domain models might use mathematical concepts like combinatorics and stochastic processes to define and track these ‘knowledge points’, the smallest possible conceptual blocks of values, skills or knowledge. Traditionally, these are defined by experts and skilled domain professionals, but a 2020 Institute for Adult Learning report suggests that machine learning techniques can collate appropriate materials and refine the relationships between knowledge points, based on how cohorts of learners perform on assessments.3 This means L&D experts could in future define the curriculum for a field as a set of knowledge points, and let AI determine the strength of the relationships between its concepts. They could also let AI pick out, from a library of resources, the materials that best articulate each concept. The new role for human experts would then be to monitor the domain model and curate new concepts to be added.
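
To make the idea concrete, the sketch below (all names hypothetical, and the estimator deliberately crude) scores each directed pair of knowledge points by how much mastering one appears to raise the odds of mastering the other, using only cohort assessment outcomes:

```python
def link_strengths(mastery, labels):
    """Estimate the strength of directed links between knowledge points
    from cohort assessment outcomes. `mastery` is a list of dicts, one
    per learner, mapping each knowledge point to 0 (not mastered) or
    1 (mastered). The strength of the link A -> B is
    P(B mastered | A mastered) - P(B mastered | A not mastered):
    a large positive value suggests mastering A helps unlock B."""
    def cond_prob(b, a, a_value):
        rows = [m for m in mastery if m[a] == a_value]
        return sum(m[b] for m in rows) / len(rows) if rows else 0.0

    return {(a, b): cond_prob(b, a, 1) - cond_prob(b, a, 0)
            for a in labels for b in labels if a != b}
```

A real system would use far richer statistics, but even this toy version shows how the strength of every edge in the knowledge web can be recomputed automatically as new cohort data arrives, leaving the expert to vet the map rather than draw it.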

In L&D, the work of establishing the structure of a curriculum usually entails selecting an appropriate curriculum model, determining appropriate standards for pre- or post-requisites, and developing the content. Most currently available adaptive learning management systems do a good job of helping learners acquire well-defined theoretical knowledge and concepts, but are less effective with practical curriculum tasks, such as honing the ability to reason logically or mastering a skill. So, while there is great potential in using AI to codify well-defined knowledge, it is best used for less complex skills on the lower end of Bloom’s taxonomy (a ranking of task complexity commonly used in education). A good example of an AI-calibrated domain model might be a job role in procurement contract management, where in-class MCQ quizzes can accurately measure a learner’s competence in identifying supplier risk management strategies.

How might AI refine a domain model? The model must first contain a set of matrices that map assessment items of varying difficulty to behavioural indicators. These matrices, called Q-matrices, can then be refined with data-driven methods that compare the responses the model predicts against those learners actually give. In curriculum areas where reasoning, especially causal reasoning, is important, recent developments in AI show some promise in using qualitative reasoning techniques to refine domain models for these usually ill-defined skills. Put simply, AI techniques can help curriculum designers intimately understand the relationships between skills in a competency map, thereby helping them evaluate and review the curriculum better.
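
A toy version of such refinement, under a simplified conjunctive ("DINA-style") assumption that a learner answers an item correctly only when they possess every skill the Q-matrix requires for it, might look like this (the greedy search and all names are illustrative, not a production method):

```python
def predict(Q, mastery):
    """Predict responses: learner answers item correctly only if they
    have mastered every skill the Q-matrix row marks as required.
    Q: items x skills (0/1); mastery: learners x skills (0/1)."""
    return [[int(all(m[k] >= q[k] for k in range(len(q)))) for q in Q]
            for m in mastery]

def misfit(Q, mastery, responses):
    """Count disagreements between predicted and observed responses."""
    pred = predict(Q, mastery)
    return sum(p != r for prow, rrow in zip(pred, responses)
               for p, r in zip(prow, rrow))

def refine_q_matrix(Q, mastery, responses, max_passes=5):
    """Greedy refinement: repeatedly flip the single Q-matrix entry
    that most reduces misfit with observed data, until no flip helps."""
    Q = [row[:] for row in Q]
    for _ in range(max_passes):
        best, best_flip = misfit(Q, mastery, responses), None
        for i in range(len(Q)):
            for k in range(len(Q[0])):
                Q[i][k] ^= 1  # try flipping this skill requirement
                m = misfit(Q, mastery, responses)
                if m < best:
                    best, best_flip = m, (i, k)
                Q[i][k] ^= 1  # undo the trial flip
        if best_flip is None:
            break
        i, k = best_flip
        Q[i][k] ^= 1
    return Q
```

Given enough learners with varied mastery patterns, a spurious skill requirement attached to an item makes the model mispredict those learners' responses, and the refinement loop removes it.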

Of late, the lowest-hanging fruit seems to be in using AI to populate a map with content and resources. Advances in natural language understanding have given rise to a number of employee experience platforms such as Microsoft Viva Topics (see box story on Through the Looking-Glass), which enable organisations to distil a repository full of documents into a map of topics, without the need for a human analyst. This helps people find pieces of information and understand how they are related. Given a schema of competency frameworks, such systems will be able to automate the tagging not just of articles and learning objects but also of existing organisational knowledge against the competency framework.

AI techniques can help curriculum designers intimately understand the relationships between skills in a competency map, thereby helping them evaluate and review the curriculum better.

Through the Looking-Glass: How A Public Officer Might Use the LEARN app in 2025

Alice’s boss asks her to relook at and streamline the current procurement processes in her agency.


From Seeing to Knowing—AI and the Learner Model

AI has proven capable of carrying out specific ‘sensing’ tasks, like identifying a person’s facial expression, tone in text messages, pose, and even speech. However, the emotions it perceives may not reflect what a person actually feels. There is no convincing evidence that facial expressions reveal a person’s feelings; indeed, a 2020 Nature article argues that there is “little to no evidence that people can reliably infer someone else’s emotional state from a set of facial movements”.4

Nevertheless, skilled instructors and trainers draw on these and a range of other signals to gauge learners’ reactions and feelings in a lesson. What is so unique about this human ability to ‘read’ learners? And is there a more objective way to do it than gut sense?

Will we get to a point where technology can know the learner well, cognitively and socio-emotionally? Currently, the answer is no. But in the hands of a skilled trainer, computer vision technology is surprisingly good at assessing and identifying performance issues in training simulations and situational tests. For instance, in MINDEF’s Murai Urban Training Facility,5 training areas are extensively outfitted with cameras and sensors that collect implicit learner-produced data, enabling high-fidelity after-action review sessions that accurately identify soldiers’ performance gaps and refine team strategy and tradecraft. In bus driver training, the Land Transport Authority has employed Advanced Driver Assistance Systems to detect driver fatigue and attention by tracking eye movements and other telematics of driving performance.6 This data is used to incentivise good driving habits and for training purposes.

At present, the use cases of AI in L&D stop at diagnosis: AI neither predicts how trainers, instructors, or learners will act, nor prescribes how they should modify their actions. This might be about to change. Further into the future, Deep Learning techniques used in gait and pose estimation, for example, could enhance the accuracy of sensor systems. Combined with expert-informed tagging of already-collected multimodal data across more aspects than those mentioned here, we could see a day when learner models accurately predict when learners need an expert’s assistance, even before the learners themselves know it, and warn them, with a high degree of certainty, of errors they are likely to make. In other words, tools used by instructors, trainers, or learners could become highly certain of a learner’s knowledge state.

An AI-enabled system can intervene in a timely, contextual manner relevant to each learner.

From Prescribing to Recommending—AI and the Pedagogical Model

AI is unlikely ever to fully replace the instructor, trainer, or coach. With the ever-expanding collection of data in the domain model (containing values, skills, knowledge, and how they relate to each other) and the learner model (containing cognitive and socio-emotional states), an AI-enabled system can intervene in a timely, contextual manner relevant to each learner. This would be difficult for a trainer or coach to do for every learner all the time; humans, however, are still needed for the tasks they do better. They can read the affective cues of each learner and tailor the experience to their needs with empathy. For example, a PSD career coach could use the personal interest, prior work experience and knowledge data in the LEARN app to provide tailored advice and ask more pointed questions that help an officer come to their own conclusions about their skillsets and options, weaving these, along with appropriate encouragement and tact, into a meaningful career trajectory and narrative.

One of the most promising areas of AI in L&D is the personalisation of individual learning paths using a highly informed, adaptive learning management system. Adaptivity can be thought of in two ways: first, macro-adaptivity, where learners are presented with the activities, knowledge or even peer groups they are predicted to learn best from or with; and second, micro-adaptivity, where adaptive systems intervene when necessary—if learners show signs that they might not be able to perform a required competency or skill—by providing short guidance on how to proceed. By constantly checking learners’ prior knowledge and their confidence in using it, adaptive systems could keep learners on the most efficient path to full mastery of the content.
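
The interplay between the two kinds of adaptivity can be sketched as a simple decision rule (the unit names, thresholds and hint format below are hypothetical, not drawn from any actual system):

```python
def next_step(mastery_est, path, hints, threshold=0.6):
    """Illustrative adaptive step selection. `mastery_est` maps a unit
    to the system's estimated probability that the learner has mastered
    it; `path` is the macro-adaptive ordering of units; `hints` maps a
    unit to a short remediation activity."""
    MASTERED = 0.95  # skip threshold (an assumption for this sketch)
    for unit in path:
        p = mastery_est.get(unit, 0.0)
        if p >= MASTERED:
            continue                      # macro-adaptivity: skip ahead
        if p < threshold:
            return ("hint", hints[unit])  # micro-adaptive intervention
        return ("advance", unit)
    return ("complete", None)
```

Units the learner has clearly mastered are skipped (the macro decision), while a weak mastery estimate on the next unit triggers short guidance instead of advancing (the micro decision).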

This idea has in fact been used successfully in general and higher education around the world,7 but it often takes the form of adaptive testing rather than adaptive learning systems. Micro-adaptive systems are also known as Intelligent Tutoring Systems in the L&D literature.8 They are becoming more widespread as domain models in some content areas become much better codified (for example, in many primary and secondary school mathematics syllabuses, content and assessments have been made adaptive).

Further in the future, learning companions or assistants could provide accurate answers and direct a learner to relevant resources in their moment of learning need. Such companions could offer work-relevant links to bridge theory and practice, be personable, and sustain conversations where a trainer or expert cannot. An early example of such companions is Clippit, the paperclip-shaped office assistant that first appeared in Microsoft Office 97 to help users make better use of Office features. Since then, conversational AI such as Google’s LaMDA (see box story on What We Mean by AI) has become far more capable.

What We Mean by AI

AI, or Artificial Intelligence, is a general term for computer programmes that can sense, reason, adapt and act.


Epilogue: Some Realities

Much of the vision outlined above relies heavily on large swathes of data, which take time to amass and prepare for use by ML/DL algorithms. There is also an assumption that systems are developed with adequate risk assessment, so that they address the ethical aspects of AI, namely fairness, accountability, transparency and ethics (FATE). Most of these risks can be mitigated by following the guidelines for AI-augmented decision-making in the Model AI Governance Framework set out by IMDA.9

CSC has started work on integrating and cleaning existing data so that it is ready for building a recommender engine. While the first iteration may be just a simple, non-personalised filter for course recommendations, deployed on CSC’s public-officer-facing learning programme portal, future iterations may use user-item interaction data to build a range of functionalities. These include content-based filtering as data becomes cleaner and more standardised, model- or memory-based collaborative filtering as more users interact with the recommendations, and deep learning-based models that personalise recommendations as more types of data about each public officer become available. With a robust recommendation algorithm, informed by the other data models (viz. learner and domain models) to be built in future, a truly personalised experience can be delivered to every public officer.
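
As a minimal sketch of the content-based filtering step (the tag vocabulary, course names and profile construction below are invented for illustration): each course is described by a vector of competency tags, and courses are ranked by cosine similarity to the officer's profile vector.

```python
from math import sqrt

def cosine(u, v):
    """Cosine similarity between two equal-length tag vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = sqrt(sum(a * a for a in u))
    nv = sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def recommend(profile, courses, top_n=2):
    """Content-based filtering: score each course by the similarity of
    its competency-tag vector to the officer's profile vector (e.g.,
    an average over tags of courses already taken), highest first."""
    ranked = sorted(courses, key=lambda c: cosine(profile, courses[c]),
                    reverse=True)
    return ranked[:top_n]
```

Collaborative filtering would instead compare user-item interaction patterns across officers, and a deep learning model would learn the profile and course representations jointly; the cosine ranking above is only the simplest starting point the paragraph describes.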

With time, we will get there: sooner, if responsible and trusted access to user data and efficient AI techniques like transfer learning shorten the time-to-market; or later, if pushback over user privacy is significant or learner data requires great effort to clean up for use. Even though there is generally high trust in government services, recently reported surveys have found, for instance, that senior citizens remain less receptive to having AI interpret medical results.10

It would be a leap to assume that public officers will react the same way to learning with and from an AI-enabled tool, but such adjustment hurdles will have to be overcome. To use smart systems effectively, change management and professional development for L&D practitioners, learners and other stakeholders will need to be worked through.


Michael Chew is L&D Specialist at the Learning Futures Group, Civil Service College, where he leads the effort to explore learning analytics methods and concepts. Previously, he worked on the education use case for the National Artificial Intelligence Strategy in the Ministry of Education.

Hoe Wee Meng is Assistant Chief Executive (Corporate) of the Civil Service College. He is also Institute Director of the Institute of Public Sector Leadership. Wee Meng has served in various policy and leadership roles in the Public Service.

Kelvin Tan is Director of Digital Learning Services, Civil Service College, where he leads the team that develops and operates the LEARN ecosystem which serves the learning needs of public officers. Kelvin has helmed technology leadership roles in various public agencies.


  1. GovTech Singapore, “Digital Government Blueprint”, accessed August 18, 2021,
  2. Smart Nation Singapore, “National AI Strategy: The Next Key Frontier of Singapore’s Smart Nation Journey”, accessed August 18, 2021,
  3. H. Bound, S. C. Tan, M. Y. Kan, and X. F. Bi, “Charting the Future of Adult Learning Research Agenda in Singapore: A Consultative Paper by the Subgroup on Innovative Technologies for Adult Learning Research”, March 31, 2020, accessed August 18, 2021,
  4. D. Heaven, “Why Faces Don’t Always Tell the Truth About Feelings”, Nature 578 (2020): 502-504, accessed August 18, 2021,
  5. MINDEF Singapore, “Fact Sheet: Murai Urban Live Firing Facility (MULFAC)”, August 14, 2014, accessed August 18, 2021,
  6. ST Engineering, “Advanced Driver Assistance Systems”, accessed August 18, 2021,
  7. Cengage, “The Benefits of Adaptive Learning Technology”, April 30, 2021, accessed August 18, 2021; Dorrit Billman and Evan Heit, “Observational Learning from Internal Feedback: A Simulation of an Adaptive Learning Method”, Cognitive Science 12 (1988): 587–625, accessed August 18, 2021; N. Sharma, I. Doherty, and C. Dong, “Adaptive Learning in Medical Education: The Final Piece of Technology Enhanced Learning?” Ulster Med J. 86, no. 3 (September 2017): 198-200, accessed August 18, 2021,
  8. Arthur C. Graesser, Xiangen Hu, Benjamin D. Nye, and Robert A. Sottilare, “Intelligent Tutoring Systems, Serious Games, and the Generalized Intelligent Framework for Tutoring (GIFT)”, in Using Games and Simulations for Teaching and Assessment, eds. Harold F. O'Neil, Eva L. Baker, and Ray S. Perez (New York: Routledge, 2016), accessed August 18, 2021,
  9. Infocomm Media Development Authority (IMDA) and Personal Data Protection Commission (PDPC), Model Artificial Intelligence Governance Framework Second Edition (2020), accessed August 18, 2021,
  10. S. Begu, “Seniors Less Receptive to Telemedicine and Uncomfortable with AI Interpreting Medical Results: S’pore Survey”, The Straits Times, July 4, 2021,
