Praise & Criticism: ADDIE Model

This article is an extension of our "In a Nutshell" series. To read our original overview of the ADDIE Model, click here.

Praise: The ADDIE model is at the bedrock of instructional design and has been the ‘go-to’ model for generations of learning professionals. ADDIE is valuable because it identifies the most important high-level steps that are relevant for designing any type of planned training experience. Different or modified ISD models are still likely to resemble the basic steps in the ADDIE model, even if they are approached in a different way.

Process models like ADDIE are often the most user-friendly because, like a recipe book, they provide a simple, logical, step-by-step set of instructions that can easily be followed. While ADDIE is not a set of specific instructions to be followed per se, the great thing about ADDIE is that it has been used as the basis for creating instructions. Because ADDIE models have been around for so many years (in one form or another), a huge number of practitioner descriptions are available. These accounts can provide added depth and dimension, with guidelines and advice about what should be included in each stage, including what’s worked and what hasn’t worked based on first-hand experience, linked to a diverse range of learning environments.

ADDIE provides a central, adaptable roadmap for making training more effective and efficient. Broadness and flexibility are arguably the model’s greatest strengths. It can be used at an individual or group level, for online and classroom training, and it has the potential to be integrated with a wide range of other training models and tools (for example, The Kirkpatrick Four Levels in the Evaluation stage, and Cathy Moore’s Action Mapping process in the Analysis and Design stages).

One of the most underappreciated aspects of ADDIE is that it overlaps with all other popular instructional design models such as the Successive Approximation Model (SAM)—an approach created by Michael Allen and promoted by ATD (formerly ASTD)—and other similar models based on Agile development and rapid application development (RAD). All popular instructional design models draw from and overlap with the ADDIE steps whether their respective authors state it or not. Where problems with the model are identified this is usually more to do with interpretations based on specific descriptions of the model, as noted by Riecker (2012): ‘… any identified deficiencies (Mr. Allen identifies seven of them) are, in my humble opinion, errors in use rather than the model itself. One must know how to use the model to be effective. [Criticising the tool] is like saying that a computer is broken because the user doesn’t know how to operate it’.

ADDIE-related models often focus on the importance of up-front planning and methodical structure, which can be seen as one of the key advantages of using an ADDIE approach. There’s often a temptation among instructional designers to jump straight into the Development stage because the creative process is usually the most interesting part and IDs want to show their work to the client as quickly as possible. By placing the Analysis stage at the front, IDs are asked to stop and consider what the client is really trying to achieve, the characteristics of the audience, and the situational factors that will influence the program’s success. For example, an ID might build an eLearning platform for several regional business units only to discover in the rollout phase that 30% of the participants don’t have software capable of supporting the platform. The ADDIE model does not necessarily require a huge amount of time and effort spent conducting research in the Analysis phase, but it does suggest IDs take into account one of Stephen Covey’s most important habits: ‘Seek First to Understand’. Another model that can be plugged into ADDIE at this stage is the SMART Goal-Setting Model.

In the end, effectiveness is a function of effort and practice. Depending on the time and energy that people are willing to invest in understanding the different ways that practitioners use ADDIE, the model can be as flexible (or cumbersome) as required. This point is perhaps best summed up by Evans (2009): ‘ADDIE can be applied in minutes, days or months. It’s ponderous if you make it so, or nimble… If it’s explained and applied effectively, I’ve found that clients become advocates for it. If not, they have little or no patience for it’.

Criticism: Like other models in this series, the ADDIE model presents difficulties for critical evaluation because it doesn’t have any definitive account to use as the basis for critique. The most obvious criticism of ADDIE, therefore, is its sheer broadness and vagueness, arising from the absence of author ownership. Models that are non-definitive, in this sense, leave practitioners with a very particular set of research and application problems that result from a multiplicity of interpretations.

The most significant criticisms of ADDIE generally centre on the earlier, ‘waterfall’ models. In this context, some of the most common criticisms of ADDIE include the time and expense required to complete all the steps (particularly the upfront research in the Analysis stage); insufficient consideration given to prototyping, iteration, formative evaluation, client collaboration and testing; and its general lack of applicability to modern, fast-paced, technologically driven, time- and cost-sensitive business environments.

These types of criticisms are typified by the following practitioner account: ‘I challenge the usefulness of the traditional ADDIE model in today’s training world. Based on conversations with colleagues and my own experiences, very few learning organisations have the time, resources, and flexibility to take a waterfall approach to developing content’ (Tyson, 2014).

People quickly started to identify flaws in the original ADDIE design and, shortly after its release into the market, refinements began to be made. The Dick and Carey Model, introduced in 1978, is a well-known modified ADDIE approach that emphasises formative and summative evaluation, and many other instructional design models have since followed its lead.

Michael Allen, author of Leaving ADDIE for SAM (2012) and the creator of the Successive Approximation Model (SAM), identifies seven weaknesses of ADDIE usage, including unrealistically comprehensive up-front analysis, the ineffectiveness of storyboards as design tools, the tendency of detailed processes to stifle creativity, and a failure to focus on identifying behavioural changes (Allen, 2007). Allen goes on to say: ‘Perhaps the biggest weakness of the [ADDIE] model is that it assumes that you can know all of the requirements before you develop the content. From practical experience we realise that the design process (developing and experimenting with the content) actually shapes the final design’. However, as noted above, these criticisms are linked more to faults in ADDIE’s usage than inherent flaws in the model’s design.

In spite of the advent of dynamic ADDIE models that can be applied to rapid eLearning development, one of the biggest problems with the term ‘ADDIE’ is that it has earned a negative reputation in many pockets of the training and development community. Allen discusses the reasons for introducing SAM in the context of dissatisfaction with the ‘old’ ADDIE model in a video for ATD: ‘Now, of course, there have been many modifications to ADDIE over time to try to deal with that, and we’re just building on those ideas. But if you’re doing something differently than is described in the ADDIE process then we need to be sure to give it another name so we don’t say we’re doing ADDIE and implying one thing when we’re doing another’ (Allen, 2012).

One of the criticisms of ADDIE descriptions is that they often fail to address the need for prototyping. ‘Rapid prototyping’, a subset of rapid application development (RAD), is sometimes suggested as an alternative approach to ADDIE while others see the method as complementary. Rapid prototyping attempts to quickly assemble a basic version of a product that can be tested with an audience or client. In more dynamic descriptions of the ADDIE model this is often incorporated into the Design stage. However, in an attempt to incorporate more Agile- and Rapid-friendly approaches, including rapid prototyping, many ADDIE descriptions have created overlaps between Design and Development. This has sometimes led to conceptual confusion about what’s involved in each stage.

Furthermore, in practice, the first three stages of ADDIE often get merged together when there is pressure to produce a program quickly (i.e., most business environments). As noted by Deale (2014), ‘It has been my experience that Analysis, Design & Development get jumbled together’. The desire to move straight into the Development and Implementation stages is also emphasised by Schulz (2009): ‘The rise of rapid development tools has caused our focus to shift from ADdiE to adDIe’. ADDIE is often associated with taking a more methodical, structured (‘clunky’) approach to content creation that doesn’t work as well for environments that emphasise speed. Plaster (2014) provides an analogy to explain how ADDIE and SAM are generally perceived: ‘If you can describe ADDIE as “ready, aim, aim, aim, fire,” then SAM is “ready, fire, aim”.’

Related to prototyping, another criticism of ADDIE descriptions is that they often fail to address the need for testing. While some practitioners might argue that continuous testing can (and should) be undertaken throughout the Design and Development stages, others would prefer to see this activity addressed as a new stage altogether: ‘I suppose what I find missing from the [ADDIE] method is a TESTING component—or a dry-run after Development. This is traditionally lumped into the Development cycle, but I prefer to see it called out’ (Ferriman, 2013).

A final criticism or caution for L&D practitioners who use the ADDIE model—related to a wider debate about the future of instructional design as a whole—is its widespread use as a top-down method (i.e., learning is determined and curated by the designer), which can lead to a parochial focus on ‘scheduled-event-training’. This complaint is also part of the broader discussion in the 70:20:10 Learning Model, in the sense that formal learning is only a small part of the way in which learning happens in the workplace. Hart (2015) echoes this sentiment: ‘… in the workplace we don’t just learn from instruction, i.e., by being taught in classroom training or online courses… L&D departments who think workplace learning is all about instructional design will paint themselves into the training corner as others in the organisation take on [new roles] to support the real learning that happens in their organisation’.
