Form design in large organisations involves crafts at the micro level of document design, the customer experience level, and the management and politics level in an organisation. This seminal 1994 paper traverses all these levels. Below is a lightly edited version of the published paper. It also incorporates some content not available in the print version.

Sless, David. “Public Forms: Designing and Evaluating Forms in Large Organisations.” In Visual Information for Everyday Use: Design and Research Perspectives, edited by H. Zwaga, T. Boersema, and H. C. M. Hoonhout, 135–53. Taylor & Francis, 1999.


The Communication Research Institute of Australia has been researching and improving the design of public forms since the mid-1980s, drawing on the pioneering work of Wright and Miller among others. One of the findings from early research on forms is that people make many errors completing forms. Much of our work has been concerned with finding out why these errors occur and then developing ways of reducing them. We have developed some innovative directed forms which change form-fillers’ behaviour and thus lead to fewer errors. We have also refined forms design methods which help reduce errors. As part of this work we wrote a grammar of forms and created a new type of software—FormsDesigner—which embodies elements of this grammar so that it can be used by non-designers.

Our work on forms is part of a larger program investigating new ways of thinking about communication. From this emerging perspective, forms are instruments for managing dialogues; the basic unit of analysis is the relationship between a form and its user. Forms are, therefore, central in explaining the communicative relationship between organisations and individuals. Forms are not only ways of collecting information, they are sites of struggle for power and status.

Historical introduction

Exactly 16 years ago, in September 1978, I had the great pleasure of attending the landmark conference here in Holland on the Visual Presentation of Information, organised by Ron Easterby and Harm Zwaga. (In fact, I was there because of the generosity of the late Ron Easterby.)

Two seminal papers at that conference, by Robert Miller and Patricia Wright (1984), were on forms design. Together, they summarised much of the practical wisdom and research in forms design that had accumulated to that time; they pointed to many achievements and also many of the unanswered questions, opportunities and problems in the field. But I was struck most of all by the underlying injunction of both speakers: most forms, they argued, are badly designed, but it need not be so; if we applied our knowledge and skill as researchers and designers, we could do something about it.

I took their injunction seriously, and I am deeply honoured to be asked to report back, as it were, and tell you what we at the Communication Research Institute of Australia (CRIA) have been doing since then to improve the quality of forms design.

Our work is built on the substantial work of others who came before us. Undoubtedly our greatest debt is to Pat Wright in the UK, who during the 70s, along with her colleague Phil Barnard, wrote a number of papers which systematised existing knowledge about language and comprehension on forms (Wright 1975; Wright and Barnard, 1975), explored some of the behavioural factors in forms completion and showed how research methods developed in ergonomics and writing research could usefully be applied to studying forms (Barnard, Wright and Wilcox 1978, 1979; Barnard and Wright, 1976), and addressed questions of quality control in document design (1979) and strategy in forms design (1980, 1984).

In these papers Wright laid out the main issues and methods for achieving good forms design and made an astute assessment of the constraints on good design in large organisations.

Miller’s (1984) paper throughout emphasised the transactional nature of forms and in many ways laid the ground for the model of communication as conversation that our Institute has found so productive in this and other papers we are giving at this conference. As will be clear, much of our own work has drawn on these seminal papers.

In parallel with Wright’s work in the UK was the pioneering work of Alan Siegel at Siegel & Gale in New York (Siegel, 1979), the Document Design Centre in Washington (Holland and Redish, 1981; Redish et al., 1981; Rose, 1981), and the Communications Design Center at Carnegie-Mellon University (Janik et al., 1981). Each of these contributed to our understanding of forms design, bringing a variety of disciplinary backgrounds to bear on the problem. Indeed, one of the major insights coming from all the researchers was the interdisciplinary nature of the task. I will have more to say on this later.

No research takes place in a vacuum, and research on forms is no exception: it breathes in the air of consumerism and its reformist ideology. Consumerism emerged as a political force in the developed world in the post-war period. One expression of this rising tide of consumerism was the growth of the plain English movement, seeking to reform bureaucratic and business writing practices that disadvantaged consumers and citizens (see, for example, publications like Plain English and Simply Stated). The resulting business and government initiatives provided many opportunities for researchers and designers to collaborate in developing new forms (for example Cutts and Maher, 1981; Waller, 1984). These and other case studies began to deal with some of the complexities of designing documents in large organisations.

As in the UK and USA, in Australia there was mounting pressure for government and business to improve their forms during the early 1980s (Hamilton, 1983), and I was asked to help in this process (Sless, 1983, 1985b). This coincided with our plans to establish a communication research institute, so not surprisingly many of the major projects our Institute undertook were concerned with forms design.

Although our work drew heavily on earlier studies, we brought to it a different perspective. Because of our primary interest in communication, we saw forms as a special type of communication. At the time we established the Institute, the discipline of communication was undergoing a major paradigm shift. I and others at the Institute were exploring new ways of thinking about communication (Penman, 1981; Sless, 1981, 1986) and our field was suffused with debates. For example, in 1983 the Journal of Communication ran a special issue, aptly called Ferment in the Field. Our research on forms design was swept up in this general rethinking and became intimately involved in our development of new theory. I make these points here because to many outsiders in related fields such as psychology or design, communication still seems an unproblematic instrumental process, a means of getting a message from one point to another. To many of us intimately involved in the changes to our discipline, communication has become much more problematic, and more interesting. Our rethinking of communication closely parallels the postmodernist rethinking in philosophy and psychology (for example Bernstein, 1992; Bruner, 1990; Shotter, 1993). And some of us believe that communication may well provide the unifying principles for the emergence of a new type of cross-disciplinary framework (Penman, 1993; Sless, 1991).

Moreover, from our point of view in trying to understand fundamental aspects of communicative processes, forms have become central in explaining the communicative relationship between large organisations and individuals—between producers and consumers, or citizens and the state (Sless, 1988). In our research, forms have therefore taken on great importance and, as you will see in other parts of this paper, our interest in forms extends beyond the reformist agenda in which it is conventionally located and takes in some of the more problematic issues of power, ideology and the status of the individual in our society.

Over the last ten years my colleagues and I have had the privilege and excitement of conducting our research in the great laboratory of our major public and private institutions, whilst helping them improve their forms and helping the public who have to use the forms. The achievements of this work are a tribute to these institutions and the individuals within them who responded, directly and indirectly, to Miller’s and Wright’s injunctions, allowing us to conduct experiments which have in many instances led to important discoveries and to good designs which have helped the public. If this paper gives some hope for others to pursue the very best of practices in design, then much of the credit goes to these Institutions. Chief amongst these have been Apple Computers Australia, the Australian Taxation Office (ATO), the Australian Bureau of Statistics (ABS), Capita Financial Group, Telecom, and the National Roads and Motorists Association (NRMA).

Through a number of large-scale projects for these organisations and others—dealing with taxation, statistics, social security, insurance, and telecommunications—our Institute has developed forms design into what we believe is a mature craft. I will trace some of the major landmarks in that development.

Forms design: a multi-skilled communication craft

From our perspective, forms design is an information design craft, not a science. As I have argued in an essay on ‘What is information design?’ (Sless, 1992a):

The critical difference between a craft and a science is one of purpose and tradition. Information design is first and foremost concerned with solving practical problems within and through a specific cultural and historical context. To that end, information designers draw on the full range of skills, knowledge, and processes available to them, some of which pre-date scientific knowledge by many thousands of years. Scientists can only draw on the stock of replicable research findings and generalisations in their field.

Thus the range of acceptable knowledge and wisdom in forms design extends beyond the research findings of science. Indeed, as many have observed, the skills needed in forms design come from a variety of disciplines and crafts. Wright (1984) suggested that skilful control of language, typography and research methods is needed, as well as a capacity to interpret relevant research findings. To that we would add skills in design methods, organisational management, organisation and methods, information management, and, perhaps surprisingly, philosophical reasoning. Forms design is a craft which usually involves a team of skilled people rather than a single individual. The Institute’s research has been directed in part to finding satisfactory ways of building such teams and providing them with a coherent and unified methodology. I shall have more to say about this in a later section.

The core theoretical framework we have used derives from postmodern communication theory (Penman, 1992; Sless, 1985a, 1986). Within such theory, public forms are instruments for initiating and managing a dialogue between individuals and organisations. The task of forms design is to effectively create the means by which that dialogue can be managed.

Most importantly, from a methodological point of view, the basic unit of analysis in this framework is the relationship between a form and its user, not the form or its user in isolation from each other. This is a central proposition, the significance of which will become apparent as we proceed.

Form-filling behaviour and evaluation of forms

We now know a great deal about form-filling behaviour—the dialogue which information designers seek to create and manage. But when we started our work little was known of these dialogues. As David Frohlich (1986) observed in his ground-breaking study of these dialogues:

Research workers have tended to view forms as static documents, paying more attention to the manifest content of form material than to the selective use of that material by form-fillers. (p 43)

This static view of forms was in part the result of the kinds of post hoc methods that had been used to evaluate forms, relying on form-fillers’ recollections of form completion rather than observing form-fillers actually completing the form. The sequences of events—the dynamic struggle to make sense of and complete a form—are lost in these post hoc methods.

Frohlich, in contrast, observed people actually completing a form. The form he used was one developed for the DHSS in the UK and previously reported on by Waller (1984). Frohlich, like Holland and Redish (1981), recognised that a form was a type of conversation.

… form-filling … suggests a carry over of familiar skills and expectations typical of natural conversation… [The type of conversation] is that of an interview or interrogation, with the form-filler in the role of respondent. (p 57)

From his observations, Frohlich suggested conversational principles that applied to form-filling, such as

I work through the questions in the order they appear on the form, I miss out questions that don’t seem to apply to me,

and the Principle of Least Reading Effort:

I only read what seems to be necessary to maintain form-filling progress.

(There are seven principles altogether.)

Some of the principles show clearly how form-fillers try to cope when the conversation breaks down, which it does frequently. Breakdown becomes apparent in the large number of errors found on completed forms. Barnard, Wright, and Wilcox (1979) reported on this, and in our own work on major government and insurance forms, we have found error rates as high as 100%, that is, at least one completion error on every form. In one of our published case histories we reported a mean error rate of seven per form (Fisher and Sless, 1990).

I should point out that these are not unusual figures. The unusual aspects of our published data are the facts that measurements of error rates were actually made (still a rare event), and that the organisation concerned allowed us to publish the results. We have other case histories where we have taken such measurements, replicating these results, but the information is too sensitive for the organisation to admit publicly. Regrettably, most organisations do not collect this kind of data and are therefore blissfully ignorant of the problem.

Completion errors on forms can be of four basic types: omission, commission, mistake and transcription (Sless, 1985b). However, knowing what type of errors have occurred on a form does not tell us why they have occurred. Our experience confirms Frohlich’s finding that to find out why the errors occurred you must observe the form being completed. It is in the dynamics of the transaction between form-filler and form that one is able to see the patterns of behaviour which lead to errors.

From this type of direct observation it is clear that the principle of least reading effort accounts for a substantial proportion of the errors. Frohlich estimated that form-fillers read less than 50% of relevant explanations and instructions on the forms he observed.

We first observed this on a massive scale in a project we began in 1984 to overhaul the major Australian Taxation Office (ATO) public form, Tax Form S. Taxpayers made many errors on the form because they did not follow instructions. The first attempt to overhaul the form had tried to solve the problem by ‘translating’ instructions into plain English. This failed. As the researchers commented:

On the criteria of delivering better quality information … neither [plain English] form is any more than a marginal advance over the old form. Our qualitative assessment … is that there are only marginally fewer errors and omissions in the information supplied by taxpayers on the test forms than was the case when the old form was tested. (Lenehan, Lynton & Bloom, 1985).

Observations showed that form-fillers applied the principle of least reading effort to all forms whether in gobbledegook or plain English, skipping headings, instructions and explanations that they deemed inappropriate. If we were to improve the form, we had to alter form-filling behaviour, that is, restructure the dialogue between form and form-filler. We did this by introducing what we later called the directed form. The seeds of this solution lay in Waller’s work on the DHSS form (1984), where it was partially, and sometimes unsuccessfully, applied, as Frohlich’s observations showed.

Before the 1986 form, a vast amount of both qualitative and quantitative data was collected on taxpayer behaviour. This included a pilot study in Western Australia in 1985 with over a quarter of a million taxpayers, probably one of the largest pilot studies ever! The data from this pilot study confirmed Frohlich’s findings and, further, showed that among the instructions most likely to be missed are the preliminary instructions, those supposed to be read before filling in the form, because form-fillers perceive the task as primarily one of form-filling, not reading. Nothing seemed to induce taxpayers to read any of this preliminary content; they went straight to the answer space and started filling out the form. The effect of this finding on the form’s design is dramatically illustrated in the progressive reduction of the number of words in the preliminary instructions from the ‘plain English’ version, via the WA pilot version, to the final version released to all taxpayers.

Table 1. Number of words in preliminary instructions in successive versions of a tax form

Version             Number of words
‘Plain English’                 321
West Australia                  107
National release                 44

The data from the pilot study also confirmed the benefits of the directed form. This type of form subtly changes form-filling behaviour from topic scanning under the principle of least reading effort—where the form-filler decides which questions to answer on the basis of a minimal reading of captions or headings—to one in which the form-filler can fill in the form only by making a series of decisions based on reading the questions. This is a qualitative change which forces the dialogue to change: instead of skimming over a question and deciding for themselves whether or not it applies to them, form-fillers must read the question in order to answer ‘Yes’ or ‘No’. So the imposition of the ‘Yes/No’ changed people’s reading of questions, but not their reading of instructions.
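The behavioural shift the directed form imposes can be sketched as a simple routing structure. The sketch below is illustrative only: the question texts and routeing targets are invented, not taken from Tax Form S or any form described in this paper.

```python
# A minimal sketch of directed-form routeing. Each question forces an
# explicit Yes/No decision, and the answer determines the next question,
# so the form-filler cannot skip a question on the basis of a skimmed
# caption or heading. Question texts and targets are invented.

questions = {
    1: {"text": "Did you receive any salary or wages?", "yes": 2, "no": 3},
    2: {"text": "Write the total amount shown on your certificate.",
        "yes": 3, "no": 3},
    3: {"text": "Did you receive any interest from banks?", "yes": 4, "no": None},
    4: {"text": "Write the total interest received.", "yes": None, "no": None},
}

def complete_form(answers):
    """Walk the form in the order the routeing dictates.

    `answers` maps question number -> "yes" or "no".
    Returns the sequence of question numbers actually presented.
    """
    path, current = [], 1
    while current is not None:
        path.append(current)
        answer = answers.get(current, "no")
        current = questions[current][answer]
    return path

# A filler with no salary but some interest skips question 2 entirely:
print(complete_form({1: "no", 3: "yes", 4: "no"}))  # [1, 3, 4]
```

The point of the structure is that the decision about relevance is made by answering the question, not by skimming past it.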

In the directed form, headings are subordinated to question numbers which reinforce the sequential nature of the task. This also enables the use of strong routeing devices throughout the form. Below is an example of a more recent design showing the routeing device.

Directed form

We have continued to experiment with and refine this type of design through a number of forms. The results are impressive, with the number of forms needing repair sometimes dropping from 100% down to 11%. In one case the total number of errors was reduced by 97.2% (Fisher and Sless, 1990): of the total number of questions answered, only 2.8% were incorrect. However, we achieved this reduction with forms completed by professional insurance agents who knew the field they were working in.

By using a variety of devices it is possible to translate a great many instructions into ‘Yes/No’ questions, but this can result in a very long form, collecting valueless ‘data’. Moreover, there will always be a residual set of instructions that cannot be translated into questions.

Forms used only occasionally by the general public, like Tax Form S, remain error prone. We estimated that even on well-developed forms 12% of all answers would still be wrong (Morehead and Sless, 1988). Many instructions critical to completing a form correctly will not be read, an inevitable consequence of Frohlich’s second principle. Unfortunately, all forms have instructions, and forms which serve complex legal and administrative processes have complex instructions. This type of form therefore has a minimum error rate which is still unacceptably high.

In 1986, the ATO, administering one of the world’s most complex tax laws, was concerned about taxpayers not reading instructions, particularly as it was moving to externalise all its taxation assessment processes, leaving the onus for providing accurate information entirely on the taxpayer. They invited us to examine the relationship between instructions and forms with a view to changing form-filler behaviour, increasing the probability of taxpayers reading instructions.

How could we change form-filling behaviour? In an earlier analysis of this type of graphics problem we showed that user errors could be dramatically reduced if the changes were made to the user’s task, rather than to the graphic material itself. I argued that:

… a more sensitive understanding of the user’s requirements for the task can lead to effectiveness … [using] an ethnography of communication as opposed to a laboratory investigation of communication. (Sless, 1981, p 155)

We decided to explore other solutions that radically changed form-fillers’ instruction-reading tasks. We wanted people to perform two quite separate tasks: to provide information, and to read instructions.

The first design we developed we called the ‘flap form’. It consisted of a series of flaps containing the instructions and questions attached to a back page that contained answer spaces only. Each successive flap was a column narrower than the last so that each one progressively revealed another column of answer spaces.

Flap form showing flap on left and data collection sheet on right.

Our testing showed that it worked well and increased the probability of form-fillers reading instructions. Also, from the point of view of the administration, the form had the advantage of being a single page full of data only, and it opened the prospect of optical character recognition or imaging as a way of storing and processing the forms. Unfortunately, the design was a logistic nightmare, because it became impossible to match the density of instructions and questions with the reducing area available on the flaps. It required the later questions and explanations to be shorter than the earlier ones. In Tax Form S the match was not too bad, but the annual changes in legislation which required new questions each year made it a precarious solution.

We therefore abandoned the idea of flaps and moved to two totally separated documents: one containing all the questions and explanations, the other answer spaces only. The document to be filled in had no guide to what information was required or where to put it, only numbers that corresponded to the instructions and questions on the other document. There was no way the form-filler could complete this type of form without referring to the instructions.

The questions and explanation document and (underneath) the answer space document

This design has many advantages and is enormously liberating typographically. One of the most difficult challenges facing forms designers is to integrate text and answer spaces in a way that is coherent, yet they have quite different requirements. Separating these functions into two documents greatly simplifies the design and editing.

At the time, we were worried that people would have difficulty tracking between the two documents, but in fact it wasn’t a problem at all: people had no difficulty. They used the answer space to track back to the instructions. Most importantly, there was a radical change in form-filling behaviour: we didn’t just get an improvement in reading of instructions; we got total reading of instructions. Instead of people reading just a little bit and then trying to answer the question, they were reading all the instructions, before going to the answer space.

The minimum error rate on this type of form is about 2% (Morehead and Sless, 1988). The Taxation Office adopted the basic principle of this design, calling it the Tax Pack. Unfortunately, their implementation violates so many other good design practices that they have lost any advantage they might have gained in terms of reduced error rates. We have yet to see a good implementation of this type of design. We suspect many organisations have been inhibited from doing so because of the greatly-disliked Tax Pack. Just the appearance of something similar to the Tax Pack is enough to put people off! (In a recent project, in 2018, we had an opportunity to try this approach again, with some success.)

Nonetheless, the original type of directed form is now widely used in large public and private Australian institutions, including government departments, statistical and research organisations, motoring organisations, insurance companies, banks, legal aid services, workers compensation schemes, and grant awarding bodies.

Many of these have been highly successful in reducing error rates and improving productivity, but only where they have been developed using a systematic methodology.

Form design tools and methodology

We need to teach people to use forms design tools and methodologies, making the tacit knowledge of the expert practitioner and researcher more widely available. There are far more forms that need to be designed than could possibly be designed by an expert group like ours. And as Paul Stiff (1993) has recently observed:

… a profession which waits for the appearance of another lone genius has still got a lot to learn about method (p 44)

We have always recognised the need to evolve tools and methodologies enabling others to design forms to a high standard. I will trace some of the steps we have gone through to evolve such methodologies.

We began to see a number of clear underlying structures that all forms had in common, a kind of common grammar. We also observed that the methodologies we had developed for working in large organisations followed a consistent pattern. We therefore systematised this knowledge to make it available to a larger group of users. There were, of course, many precedents for us to follow.

Forms grammar and FormsDesigner

In his case history of a DHSS form, Rob Waller (1984) made a comparison between punctuation and typography on forms, as he had done before in relation to typography in text.

Like punctuation, typography is a blend of system and art, grammar and style. … Unlike punctuation, though, there has been no attempt to incorporate typography into the formal rules of grammar.

Waller suggests that a grammar of forms—systematically developed using a variety of descriptive, analytic, and empirical methods—would be worthwhile for designers and form-fillers, giving both, as he says, ‘a shared knowledge of a system of conventional marks’.

So in 1988 we began developing a forms grammar, investigating and codifying the linguistic and graphic rules used in forms (this project was undertaken by Cathy Appleton, Liz Patz and David Sless, and funded by Capita Financial Group). Using linguistic techniques adapted for typography (Norrish, 1987), we surveyed over 2000 forms from many countries, creating a descriptive forms grammar: a comprehensive catalogue of form parts and the rules governing their use. But a descriptive grammar picks up both good and bad practices. Using our expert knowledge of good forms design practice and our data on forms performance, we wrote a prescriptive grammar: one that embodied rules and practices that were found to have worked.

This grammar could be used by an expert information designer, but not by an untrained person. So we decided to incorporate part of the grammar into a computer program in such a way as to make it unobtrusively possible for non-designers to write, draw and edit forms to a high quality. Just as word processors with text wrap-around and proportional spacing have given writers a powerful tool for creating formatted documents, our program was designed to give inexperienced forms designers a powerful tool for creating forms.

Using object-oriented programming and principles derived from expert systems and artificial intelligence, we created a software tool embodying the grammar. Limitations of budget and programming tools made it impossible for us to incorporate the full grammar, but we incorporated enough of it to create a commercial product—FormsDesigner—which has been well received (Brockman, 1994). With further development work such a tool could have very broad application.

One of the main features of the program is its control of graphic consistency. As in other types of graphic communication, such as computer interface design (Marcus, 1992), consistency is an essential part of good design. However, the normal way of learning about consistency involves extended training in graphic design and typography. Even though we may all wish to see an army of trained information designers streaming out of our universities, there will never be enough such designers to build all the forms that could benefit from good information design. Just as there will always be far more writers than typographers, there will always be far more bureaucrats in need of forms than forms designers. We created FormsDesigner to fill the gap.

FormsDesigner is a prototype of a new type of software which, if successful, will add a new dimension to the designer’s task. To enable non-designers to achieve a high quality of design for routine documents and interfaces, designers will increasingly be called upon to create the rule systems for good design which others can follow. We are already seeing this happen in a number of areas (Sless, 1992; Sless & Wiseman, 1994).

Forms design methods—generalised procedure

Needless to say, to achieve the best results with these software tools, they must be part of a systematic methodology. The other half of our work on forms has been directed at discovering the best types of methodologies to use.

Forms design methods are an example of a more generalised information design methodology. We can see this clearly in the type of diagram Wright adapted from Felker (1980) to describe the forms design process. Similar diagrams are used in symbol design (Foster, 1990) and in computer interface design (Laurel, 1991). This is not surprising, since all are concerned with managing a dialogue and dealing with information exchange.

We have elaborated and refined this methodology as a result of experience (Fisher and Sless, 1990).

All these methods derive from the more generalised field of design methods, as developed in such areas as product engineering, systems and architectural design (Jones, 1980).

The essence of the method consists of:

  • defining objectives
  • developing prototypes
  • testing and modifying prototypes repeatedly until optimum performance is reached consistent with the objectives
  • implementing the design.
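The iterative core of the method above can be sketched in code. The fault model below is a toy assumption, not part of the paper’s method description: each modification fixes the faults found, but fixing a fault can expose a new one, which is why a single round of testing and modification is never enough.

```python
# A sketch of the test-and-modify loop: cycle until a test round finds
# no faults. The fault behaviour is an invented toy model.

def iterate_design(initial_faults, exposes, max_cycles=20):
    """Cycle test -> modify until a test round finds no faults.

    `initial_faults`: faults present in the first prototype.
    `exposes`: maps a fault to the new fault its fix introduces (if any).
    Returns the number of test rounds performed.
    """
    faults = set(initial_faults)
    rounds = 0
    while rounds < max_cycles:
        rounds += 1
        found = set(faults)             # testing reveals current faults
        if not found:
            break                       # a clean round: objectives met
        for fault in found:
            faults.discard(fault)       # modification removes the fault...
            if fault in exposes:
                faults.add(exposes[fault])  # ...but may introduce another
    return rounds

# One initial fault whose fix exposes a second: two modify rounds plus
# a final clean test round, matching the minimum of three cycles.
print(iterate_design({"A"}, {"A": "B"}))  # 3
```

Even this toy version shows why the loop, not any single test, is the unit of quality control.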

Despite their generality, these methods are still very rarely used in forms design, which is one reason why so many badly designed forms find their way into use.

Here is a list of the sequence of stages that constitute good forms design methods.

  1. Identify the dominant voices in the organisation’s dialogue.
  2. Decide what information needs to be collected or given.
  3. Find out who are the users of the form.
  4. Find out about the context in which the information is to be collected or given.
  5. Develop a prototype of the form.
  6. Test the form with users to see if it works.
  7. Modify the form in the light of the testing.
  8. Repeat testing and modification at least three times.
  9. Introduce the form on a small pilot scale.
  10. Modify the form on the basis of the results from the pilot.
  11. Introduce the form.
  12. Monitor the form in use, measuring against known benchmarks.

Obviously a great deal could be said about each of these stages, and in later sections I will deal with some of these in detail. But it is worth making a few general observations. First, for long and complex forms, these stages can take a long time. Six months would be a typical time span from beginning to end. Second, not shown in the above list, but critical to the success of the project, is the detailed negotiation with stakeholders which can go on throughout the development of the form. I will discuss this in more detail later.

Testing and benchmarking

Testing and benchmarking play a central role in good forms design as can be seen in the above methodologies, and there have been many reviews of potential testing methods (Wright, 1979; Holland and Redish, 1981; Sless, 1985b).

Even though no large scale comparative evaluation of methods has been published, nor any meta-analysis of the type undertaken in other areas (eg Nielsen & Levy, 1994), we have accumulated a sufficient body of experience to suggest that some testing and benchmarking methods may be better than others in large organisations where there is often a shortage of resources and skills to undertake the work. In such environments, testing methods have to be robust, easy to apply, and highly predictive of the outcome. We have narrowed the range down to two methods: diagnostic testing and error analysis.

Diagnostic testing

A diagnostic test is an investigation of a form in use. It allows us to see first hand the way in which people use forms. Interviews give only unreliable information on what people remember about forms; analysing errors tells us something about what went wrong, but not why. Observations allow us to get closer to the question of why a particular behaviour occurred. We need to know why a particular behaviour occurred before we redesign or modify a form.

Previous research (Sless, 1979) and practical experience in the field have shown that developing successful designs requires at least three cycles of testing and modification; with complex forms the number of cycles can be much higher. At least three cycles are needed because, firstly, changing a form to eliminate one fault may introduce a new one, and secondly, it is impossible to pick up all the faults in a single round of testing and modification.

Error analysis

An error analysis consists of counting and tabulating the number and type of errors that have occurred on a sample of forms.

We have found that an error analysis is the most important quantitative measure of a form’s performance. It is the basic quantitative benchmark against which we can compare the performance of one form with another, and estimate some of the less obvious but major costs in using forms. Good design can reduce the incidence of errors on forms. But we cannot begin to improve the design of a form if we don’t know how well or how badly the form has performed in the past. An error analysis will tell us something about what has happened, but not why.

There are always more errors made in completing forms than are visible through an error analysis. For this reason we never use an error analysis as our only source of information on a form’s performance. But when used in conjunction with diagnostic testing, error analysis can be very powerful.
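As a concrete illustration, an error analysis of a small sample can be tabulated in a few lines. The error categories below are invented for the example, since real categories depend on the form being studied.

```python
# Counting and tabulating errors from a hypothetical sample of forms,
# each hand-coded into a list of error types.
from collections import Counter

sampled_forms = [
    ["omission"],               # form 1: one question left blank
    [],                         # form 2: completed correctly
    ["omission", "wrong box"],  # form 3: two errors
    ["illegible"],              # form 4: an unreadable answer
]

tally = Counter(error for form in sampled_forms for error in form)
forms_with_errors = sum(1 for form in sampled_forms if form)
error_rate = forms_with_errors / len(sampled_forms)

print(tally.most_common())  # error types, most frequent first
print(f"{error_rate:.0%} of sampled forms contain at least one error")
```

Tabulated this way, the figures give a benchmark for comparing one form, or one revision, with another, though, as noted above, they say nothing about why the errors occurred.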

Form design guidelines

Robert Miller (1984) captured the essence of good forms design when he observed that:

The appearance of the form should not only be inviting, but should enhance the importance of what information the user is to supply. That is its reason for existence. Many forms seem to have contempt for the user’s data … The quality of the paper, typography, printing can appear to belittle the importance of the users’ effort in filling it out, and of the transaction in which the form’s content is an essential part. The quality of these factors is more than a subtle indicator, to the user, of the originator’s attitude towards them and to the importance of the transaction. (p 536)

Miller clearly points the way when he suggests that we must stop creating forms that show contempt for the form-filler’s data. We must replace contempt with respect. Guidelines are not there just to deal with the technical issues of good forms design, they are there to lend a certain dignity to a humble task, to make the ordinary act of completing a form important and valuable. We have written many sets of guidelines for forms designers with Miller’s words in mind.

We usually write for specific organisations because it is difficult to produce a generalised set of guidelines for all types of forms. Our guidelines for the Australian Bureau of Statistics collection forms (ABS, 1988) are probably generalisable to other statistical agencies, but not to other types of organisations. Specialist organisations—banking, insurance etc—have highly specialised requirements. Moreover, the house style of an organisation can have a major impact on choice of fonts, colours and general layout grids.

The nearest we have come to a generalised text is the Guide to good forms (CRIA 1992), which accompanies our FormsDesigner software. The Guide does not cover technical details such as font sizes and positioning of elements, which are built into the software. We are currently preparing a fuller version of the text. Arguably, FormsDesigner, with its built-in rules, is a dynamic set of forms design guidelines.

Are such texts useful? Anecdotal evidence suggests that texts are seldom used. Moreover, guidelines can quickly ossify and become rules. For example, we developed a set of guidelines for one organisation based around a graphics software package that allowed only a limited number of fonts and font sizes. That software now has much more flexible font control, to allow a more sensitive and appropriate use of typography. Unfortunately, six years on, the organisation still regards its original guidelines as the ‘gold standard’, and is highly critical of anything that does not conform to it.

Balanced against the risk of ossification is the need to retain corporate memories of skills and procedures. Many contemporary organisations have got rid of people with information design skills, often without realising their value to the organisation. A set of guidelines is sometimes the only remaining repository of knowledge on which future staff can draw. However we choose to make our knowledge and skills available, whether through guidelines or through software systems, passing on our accumulated practical wisdom and research findings is an important part of improving forms design practice.

Language, context, and design issues

Language is one of the most critical elements in a form’s functioning. It is the core of the conversation that takes place between the form-filler and the form. However, it only functions in context, that is, at the moment when it is read and used by the form-filler. Moreover, language is always embedded in typography and surrounded by other graphic devices that make up the form. Language is not only a critical element, it is inseparable from the reading context. We can only know if a particular use of language is appropriate if we see it in this larger context.

No set of guidelines on the use of language can fully anticipate the specificity of particular contexts. We have to observe the language in use to judge whether or not it is appropriate.

This is another way of saying that the basic unit of analysis in our work is the relationship between a form and its user, not the form or its user isolated from each other, and not the language of the form on its own. This is why we emphasise one overarching principle: always test the form diagnostically, and be prepared to change wording as a result of testing.

Wright and Barnard (1975) provided a summary of general reading principles that has been highly influential in nearly every set of guidelines that have been written since. Useful though it is, it gives little insight into the dynamics of reading on forms, and much of the research was drawn from the reading of text on other types of documents where headings provide valuable signposts for the readers. But reading during form-filling is quite different from other reading tasks, and in that context prominent headings can be a distraction rather than a help.

Much of our own work, building on this earlier work, has been concerned with the dynamic aspects of form reading and its effect on form-filling behaviour. The principles that emerge from this are more like principles of good conversation management than good language usage. Sequence, turn taking and voice become the focus, rather than words, sentence structure and hierarchy. If we were to summarise the principles briefly it would be to say that the closer a form can get to everyday conversation, the better it will be.

The graphic issues of forms are no less contextual than language. However, in our experience, form designers find graphics harder to understand and control. Typographers and designers have long known that typography and graphic marks play a role in helping the reader construct meaning, but those without such training are unaware of the role of typography and graphic design, have no developed vocabulary for articulating such meaning, and no capacity to control it. It is perhaps ironic that this most visual aspect of forms design should be so invisible.

Often, in guidelines for forms design, one of the principal tasks is to make the graphic visible and to provide the form designer with an elementary vocabulary of typography and graphics. This has been our primary focus. There are clearly too many technical details to mention in this paper, but I would like to mention one small innovation that in my view captures the essence of Miller’s concern for form-fillers: the use of the floating white box against a tinted background, used in many of the illustrations in this paper.

This is now a common feature of many forms. Used appropriately, these simple graphic features dignify the form-filler’s task, enable form-fillers to estimate how much work is involved in completing the form, and allow them to check their answers easily once the form is complete. They also dignify the task for people processing the form—often themselves the victims of a form’s poor design—by making the form’s data more readily identifiable and legible.

Electronic design systems

Features like the floating white box were not an innovation of the electronic design systems of the last decade, but they are the type of feature that has benefited greatly from this technology, and they serve to illustrate what this new technology has meant for forms design. In 1983, a form with floating boxes took the South Australian Government Printer 10 days from presentation of a marked up design to production of a proof. The cost of typesetting alone for each page was about $300. In 1985, using a Macintosh, the same form was created directly, with no intermediate typesetting or mask cutting. We then printed a laser proof copy in a matter of seconds. The cost of the equipment and the time taken to master it was fully amortised after creating 50 pages.

The great leap forward was not just the rapid and cheap production of final artwork. More importantly, it enabled us to rapidly generate prototype forms and then iteratively test and modify them. I had anticipated the need for this type of methodology in some earlier research (Sless, 1979) but it wasn’t until the advent of the Apple Macintosh that we had a cheap way of putting this methodology into practice.

In the decade since then we have benefited from software and hardware developments that have given us increasingly sophisticated control of both graphics and text. But these have been modest innovations in comparison with the one that occurred with the introduction of this technology in 1984. Rapid prototyping coupled with repeated testing and modification has enabled us to do something new. We can rehearse the conversation at the core of a form as often as necessary in order to make it work well, and we can do so cheaply and quickly. To give you some idea of what that can mean in practice: between 1984 and 1986 we helped the Australian Taxation Office (ATO) design, test, redesign and retest Tax Form S nineteen times before the form’s national release. (A full account of the type of methodology involved can be found in Fisher & Sless 1990 and in Penman & Sless 1992.)

This technology has had a profound effect on our design methods and in our way of thinking about the design process. It has enabled us to integrate traditional design and writing crafts with powerful analytic tools and the more recent scientific crafts of testing. An interesting development has been the way in which the graphic designers, writers, philosophers and psychologists in our team have adapted to each other’s contributions and taken on board both the skills and the challenges associated with this enlarged methodology. We have all learnt a new humility, as we routinely observe people using the information we design, and as we discover how each of our colleagues’ skills contribute to the final design. For me this has been the fulfilment of a dream. In the early eighties when it was nothing more than a hope, I wrote:

The new information age will require many information designers. They will have to be capable of taking information users into account as part of their professional activity. This will require a redefinition of their job, an acknowledgment of their own limitations and an informed and sensitive awareness of the needs of the information user. The last of these can only be achieved by forming better theories about users, developing methods of design research that are not dependent on outside expertise, and acquiring an informed sense of the history of information design, combining all these to create new conventions to meet new communication needs and technologies (Sless, 1985c, p 2).

We have now gone some way towards developing the integrated methods which will enable us to develop guidelines and train the next generation of information designers. But we need to look beyond these aspects of the design process to get a full picture of this emerging craft. We need to turn our attention to performance and to the politics of design beyond that.

Standards of performance

What kinds of standards of performance should we expect from our emerging craft? It is clear from our own practice of this craft that the traditional visual standards of aesthetic judgement employed by graphic designers are no longer a sufficient basis for judging information design. We cannot simply look at a form and say it is well designed on the basis of its appearance alone. We must judge a design by its performance, by what people can and want to do with it. A well designed form should be appealing and inviting to the form-filler; it should also be easy to use.

In this shift you can see the full force of our methodological point that the basic unit of analysis is the relationship between a form and its user. When we set standards of performance for forms we are setting standards for managing dialogues, not for printed or electronic artefacts.

Many information-intensive organisations are now concerned about setting quality standards and achieving best international practice, as has been done successfully in manufacturing industries. However, there are many differences between measuring quality and best practice in manufacturing and measuring these things in information-intensive activities (Sless 1994). The main difference is that in manufacturing one is measuring the performance of an artefact; in information-intensive activities one is measuring a relationship over time.

The most obvious measures of a form’s performance are error rates and the cost of repairing them. But though error rates offer an easy numerical way of judging a form’s performance, the meaning of the numbers can vary considerably and comparisons of error rates between different forms may be meaningless. For example, when a form is filled in by a client at a counter with the help of a clerk, many errors can be corrected as part of the dialogue between the client and the clerk, and never find their way onto the form. Such dialogues can mask design faults. Therefore, when making a judgment about error rates, we must always look at the dialogues through which the form lives, rather than at marks on paper. Forms that live through different dialogues may not be comparable.

But if these measures are done on a routine basis, they enable you quickly and simply to get a sense of changing performance. We have found that a form’s functional life can be remarkably short, sometimes less than six months. This is not at all surprising if one thinks of a form as a conversation. Conversations, by their nature, are changing all the time. In ordinary discourse they are constantly modified to keep pace with the changed relations and circumstances. Forms’ functionality deteriorates because the conversations of which they are a part change, but the forms themselves do not.

One useful measure is the time someone takes to complete a form. For example, in our diagnostic testing on two comparably complex forms—Tax Form S and the AUSTUDY form—we observed that it took people about 30 minutes to complete each form. In both cases, after about 20 minutes, form-fillers started to show signs of tiredness and lack of concentration. Error rates in the later third of both forms tended to be higher than in other parts. We do not know whether this rise in errors was due to form-filler fatigue or to the fact that some of the questions in this later third were particularly complex, but we suspect there is some correlation between the number of questions on a form and the error rates we measured.

This is also a useful measure, as it tells us something about the burden we are placing on the form-filler. This is a hugely neglected area where real costs are often hidden. Often organisations will take account of their own internal costs but fail to take account of the costs borne by the form-filler. In Australia, for example, most of the cost of processing tax returns is borne by the public. The Tax Pack has become a monster. Far from being a vehicle of enlightened government, it has become the means by which the Australian Taxation Office (ATO) has externalised its administration, placing the burden of tax assessment on the citizen. Our own estimates suggest that for every cent spent by the ATO on tax collection, citizens spend up to nine cents extra in complying with the taxation law, much of this extra being taken up by professional fees to accountants. Well over 60% of salary and wage earners refuse to use the Tax Pack and get a professional to complete their tax returns for them.

Frequently-used but totally misleading figures on forms performance are printing costs or document size. In the USA, where there has been a ‘paperwork reduction’ policy, some have interpreted this to mean a reduction in the size of the pieces of paper rather than a reduction in the work associated with processing the forms. Printing and paper bills may go down, but the work, which is by far the largest cost, doesn’t.

Forms improved using the methods described above—apart from our directed data sheet—are usually longer and can easily occupy twice as many pages as their unimproved counterparts while asking the same number of questions. But they are less costly to repair and no more costly to process. In fact our data suggest that they may be faster to process. Below are comparative figures for the time taken to process an old design of a motor vehicle insurance form, two A4 pages long, and a new design, seven pages long, collecting the same data (Penman 1990).

Staff members keyed in five sets of data on the old form and the same five data sets on the new form.

Table 2: Improvements in processing time for new forms 7 pages long, versus old forms 2 pages long, both collecting the same data.

  Data set      Average difference (secs)
  First set     -87.6
  Second set    -60.2
  Third set     -37.6
  Fourth set    -19.8
  Fifth set     +31.4

The negative sign indicates the new form took longer to process than the old form; the positive sign indicates the new form was processed faster. With each data set processed, staff became progressively faster on the new form, and by the fifth set the new form was being processed faster than the old one. These figures are consistent with our findings in other projects where the data has remained confidential.
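The learning trend in Table 2 can be summarised with some simple arithmetic over the five average differences (the figures below are taken directly from the table):

```python
# Average processing-time differences from Table 2 (seconds);
# negative means the new form took longer than the old one.
diffs = [-87.6, -60.2, -37.6, -19.8, 31.4]

# Improvement from each data set to the next: consistently positive,
# showing staff speeding up as they learn the new form's layout.
gains = [later - earlier for earlier, later in zip(diffs, diffs[1:])]
```

Every round-to-round gain is positive, which is what one would expect if the initial slowness reflects unfamiliarity with the new layout rather than any inherent cost of the longer form.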

This type of measurement, despite its numerical quality, is not neutral. We took these measurements in order to demonstrate an argument. There is nothing clean or scientific about such measures, they are part of the ongoing debates within any organisation about costs and benefits, improvements in productivity and profitability, and changes in relations with clients and customers. They are part of the politics of design. And it is therefore fitting that I now turn in the final section of this paper to deal with the political issues of forms design.

Managing the design process

Forms in an organisational context

Forms are the unloved beasts of burden in the information society; they fetch and carry the raw material—information—on which organisations depend. Without forms most organisations would grind to a halt. Forms are not, as Waller (1984) suggests:

… an unfortunate side effect of the state’s involvement in the lives of its citizens and businesses. (p 36)

They are the principal means by which the state and businesses conduct their relationships with people, whether as citizens or consumers. Forms are the core of that relationship. The fact that forms are badly designed tells us directly about the quality of that relationship. It is messy, unfair, and, in the main, serves the interests of the state or business, not the citizen or consumer.

The main reason forms have improved over the last twenty years is the rise of consumer advocacy and consensus politics, and managers’ suspicion that if they improved the quality of their forms, their organisations would be more efficient and more profitable.

We are fortunate to live at a moment when equity and efficiency seem to go hand-in-hand. It has not always been so, and it may not be so for long. We must take advantage of it while we can. We should always bear in mind that if there were easier ways of saving money or making profits, contemporary support for good information design would fade. If we did not live in a consumer society, and our managers did not depend on the support of those they manage, public graphics and information design would very quickly become a marginal interest.

I make these points not to undermine our collective efforts with cynicism; there is important work to be done in giving the prosaic aspects of our lives dignity and respect, as Miller suggested. I make these points to give you an insight into the fragile political climate that sustains our efforts, a climate we must be prepared to fight for. We need to be advocates as well as researchers and designers.

Good form design does not come naturally to large organisations. As Waller (1984) observed:

Although there have been substantial advances in research on forms and related issues, experience shows that good form design does not just result from the correct application of guidelines about individual problems. (p 36)

If we are to leave a lasting mark on our institutional structures then somehow we have to naturalise good practices within these organisations. We must make it easier to design forms well than badly. This is the great challenge facing our field in the future.

I would not wish to understate or minimise the challenge. Forms are integral to organisational procedures and are subject to multiple control by a large number of interests. Without understanding this organisational context and developing strategies for dealing with it, most attempts to improve forms fail.

Negotiations, dominating voices and politics

When we at the Institute estimate time on forms design projects we usually go through each of the tasks counting up the hours we know from experience each task will take. Then we double the figure. The extra time, about 50% of the project time, is devoted to politics and negotiation with dominant voices in an organisation. Nothing of what follows is easy or simply a technical exercise, and there is no guarantee of success.

Imagine six or more people, each with a totally different interest and a belief that their interest is more important than anyone else’s, separately engaging you in a conversation about one subject. There is a very good chance that they will each give a different account of their conversation with you. How would you bring all these conversations together? This in essence is the problem you face with designing forms in an organisation. Forms are a part of a type of conversation. But each interest within an organisation sees that conversation in a different way.

We have found it particularly useful when dealing with this organisational context to use the Logic of Positions (Sless 1986). Put simply, the logic enables us to deal with the fact that what you see or do not see of any communicative phenomenon depends on your position within the communicative landscape, and on your relationship with that communicative phenomenon. Just as each person in a landscape can see some things but not others, depending on where they are standing, so, by analogy, in communicative environments we all engage in conversations from our own point of view and with our own interests in mind. We see the texts we write and read from our own unique position in the communicative environment.

But though we cannot see the communicative environment from somebody else’s position, we try to imagine their communicative activity. For example, administrators may not be able to see the forms they create from the form-filler’s point of view. But they create an image of that form-filler—what I call an inferred reader—to fill that hidden part of the landscape. These shadowy images—inferred readers and authors—inhabit all our communicative experience in a powerful way. Often they are inseparable from our opinions about a particular text. If you listen carefully to conversations you will quickly learn to spot these shadowy figures. Below is a typical example from a senior lawyer in a bank:

You can’t afford to separate the two ideas in that paragraph with a full stop. It would encourage people to ignore the second clause, which tends to qualify the first. It might just possibly lead to misunderstanding. (Westpac 1987 p 5)

Clearly the lawyer is not offering us a legal opinion. He is offering us an opinion about an inferred reader, one not based on any detailed knowledge about human reading behaviour. This is part of what we have to deal with when designing forms or any other texts in an organisational context.

The first task, then, in managing the design process is to identify all the separate voices in the conversation and the inferences they make about other users of the form. As in ordinary conversations, some voices will be dominant.

Typically in large contemporary organisations the dominant voices represent the dominant interests within the organisation. In an insurance company, for instance, these will be underwriters, lawyers, agents, investment managers, marketing managers, administrators, and information system managers. Each will come with a variety of inferred readers; they will each tell you how other stakeholders use the forms; and they will all give you different accounts of how the company’s customers use the forms. To make matters worse, their inferred readers, about which they hold strong opinions, can change in unpredictable ways. And because it is not part of their professional knowledge, they can be extremely undisciplined, erratic and contradictory in their opinions. What they tell you will in part depend on your position, or rather what they think your position is. You are not a neutral observer. You are, like them, in the landscape and a participant in the conversation.

This is the raw material you have to work with. Your own position relative to these stakeholders is critical to your success. If you have no power, you will fail. You have an additional problem: if you take up the injunction which runs through our tradition of information design, you are there to represent the interests of a formally unrepresented constituency, the customer, client or citizen, acting as what Macdonald-Ross and Waller (1976) described (after Neurath) as a Transformer.

The difficulty facing a Transformer is as much institutional as it is political. There is no institutional definition of this role in most organisations, and self-proclaimed crusaders who insist on defining their own job are seldom welcome. Moreover, giving a previously unrepresented constituency some say in the decision-making process involves at the very least a redistribution of power and control. Anyone involved in organisations will know that power and control are things that people fight over and do not relinquish easily.

This then is the reality of design in a large organisation. But it is not entirely depressing. Because the inferred readers of dominant voices are changeable, you can have a powerful effect in shaping them, by providing the dominant voices with a glimpse of the one thing they do not have: direct experience of actual form-fillers. You can bring them back selected news and evidence from the front line, as it were—at the interface between the organisation and its public. This can be very potent, particularly if the evidence is good. Error rates are very useful in this regard. Many organisations of our time claim to be ‘customer driven’. Therefore, news from the driving seat is of more than ordinary interest.

You must also remember that dominant voices do not have to agree with each other. They only have to agree that your design meets their needs. If you handle your negotiations with dominant voices on a one-to-one basis, rather than in committee, this allows you considerable scope for satisfying the multiple interests that converge on a form.

Finally, before-and-after data on a form’s performance can be extremely important in persuading people to continue allowing you to improve their forms. But to succeed, you have to insist on measurement at both ends of the process. If you tell people that you have improved their forms but you have no evidence to prove it, what does ‘improvement’ mean?

Forms as instruments of social control

It is impossible to consider forms in any depth without acknowledging that they are instruments of social control. Forms constitute one of the major unobtrusive methods by which bureaucratic societies exercise control over citizens and consumers.

Issues of social control have been much debated in our western culture. In recent times that debate has focused on the role cultural artefacts play in social control. In the 1960s, the neo-Marxist critique of western culture took its inspiration from Marx’s dominant ideology thesis:

The ideas of the ruling class are, in every age, the ruling ideas: ie. the class which is the dominant material force in society is at the same time its dominant intellectual force. The class which has the means of material production at its disposal, has control at the same time over the means of mental production… (Marx, The German Ideology, 1845-6, pp. 35-7)

This became overlaid with Michel Foucault’s critique of individualism. Foucault asserted that we are defined as subjects in and through the dominant discourses in our society. We are, as it were, already inscribed in the texts we read.

Most of the cultural critics who have followed in this tradition have pointed to the objects of popular culture, literature and art as the coercive apparatus of the state and big business capitalism. This has always struck me as odd. In comparison with the coercive power of forms, popular culture and the arts seem the most resistible of artefacts. I can switch off the television, refuse to open a book and never visit the art gallery, but I cannot turn away from the car insurance form, the income tax form, or the census. The consequences of not watching television are trivial compared to the consequences of not having car insurance. The sensation of being inscribed in a text of somebody else’s making is not very strong when it comes to television, yet the feeling is all too real and palpable when completing a form. (For a detailed exploration of this issue see Forms of Control)

Perhaps one of the clearest symptoms of the power exercised by forms is the fact that they are objects of nervous humour. I have yet to see a newspaper article on forms, or be interviewed by a reporter about them, without some weak joke being made or implied. Yet forms are very unfunny things.

I suggest to you that we need a cultural critique of the form. What type of subject do our bureaucracies construct in the forms we have to fill in? Can forms in our type of society ever be artefacts that we associate with dignity and respect for ordinary people, as Miller suggests they should be? These are, of course, not questions about forms per se, but about the whole nature of the relationship between citizen and state, business and consumers. As we help our large organisations improve the quality of their forms, we need to continually ask ourselves whether we are handmaidens to a system that helps us or coerces us. Do we reveal the hand that cares or disguise the fist that controls?


References

Australian Bureau of Statistics (1988). Forms Development Procedures and Design Standards. Canberra: Australian Bureau of Statistics.

Barnard, P., and Wright, P. (1976). The effects of spaced character formats on the production and legibility of handwritten names. Ergonomics, 19, 81-92.

Barnard, P., Wright, P. and Wilcox, P. (1978). The effects of spatial constraints on the legibility of handwritten alphanumeric codes, Ergonomics, 21, 73-78.

Barnard, P., Wright, P., and Wilcox, P. (1979). Effects of response instructions and question style on the ease of completing forms. Journal of Occupational Psychology, 52, 209-226.

Bernstein, R. J. (1992). The new constellation. Cambridge: Cambridge University Press.

Brockman, R. J. (1994). Software review: FormsDesigner. Information design journal, 7(2), 171-174.

Bruner, J. (1990). Acts of Meaning. Cambridge, Massachusetts: Harvard University Press.

Communication Research Institute of Australia (1992). Guide to good forms. Canberra: Communication Research Institute of Australia.

Cutts, M., and Maher, C. (1981). Simplifying DHSS forms and letters. Information design journal, 2, 28-32.

Felker, D. B. (1980). Instructional Research. In: D. B. Felker (ed.), Document Design: A Review of the Relevant Research. Washington, DC: American Institutes for Research, Technical Report 75002-4/80.

Fisher, P., and Sless, D. (1990). Information design methods and productivity in the insurance industry. Information design journal, 6(2), 108-29.

Foster, J. J. (1990). Standardizing public information symbols: proposals for a simpler procedure. Information design journal, 6(2), 161-168.

Frohlich, D. M. (1986). On the organisation of form-filling behaviour. Information design journal, 5(1), 43-59.

Hamilton, S. (1983). “Just fill in this form…” Are we asking our clients to do the impossible? Canberra: Department of Social Security.

Holland, V. M., and Redish, J. C. (1981). Strategies for using forms and other public documents. Paper presented at Georgetown University Round Table on Languages and Linguistics, Document Design Centre, American Institutes for Research.

Janik, C., Swaney, J. H., Bond, S. J., and Hayes, J. R. (1981). Informed consent: reality or illusion. Information design journal, 2(3&4), 197-207.

Jones, C. J. (1980). Design Methods: seeds of human futures. London: Wiley.

Laurel, B. (1991). Computers as Theatre. Reading, Mass: Addison-Wesley.

Lenehan, Lynton and Bloom (1985), Market testing of two draft designs for a new tax form ‘S’: A qualitative/quantitative report prepared for the Australian Taxation Office, Canberra.

Macdonald-Ross, M., and Waller, R. (1976). The Transformer. Penrose Annual, 69, 141-152.

Marcus, A. (1992) Graphic Design for Electronic Documents and User Interfaces. New York: ACM Press.

Miller, R. (1984). Transaction structures and format in form design. In: H. Zwaga., and R. Easterby (eds), Information Design (pp. 529-544). Chichester: John Wiley & Sons.

Morehead and Sless (1988). Integrating Instructions with Australian Taxation Office Forms: Final Report. Canberra: Communication Research Institute of Australia.

Nielsen, J., and Levy, J. (1994). Measuring Usability: Preference vs. Performance. Communications of the ACM, 37(4), 66-75.

Norrish, P. (1987). The graphic translatability of text. London: Department of Typography and Graphic Communication, University of Reading.

Penman, R. (1981). Interpersonal communication: Competence and co-ordination. Australian Scan, 9&10, 31-33.

Penman, R. (1990). New car insurance proposal form: Rationale and evidence. Report to NRMA Insurance Ltd. Canberra: Communication Research Institute of Australia.

Penman, R. (1992). Good theory and good practice. Communication Theory, 2, 234-250.

Penman, R. (1993). Conversation is the common theme: understanding talk and text. Australian Journal of Communication, 20(3), 30-43.

Penman, R., and Sless, D. (eds) (1992). Designing information for people. Canberra: Communication Research Press.

Redish, J., Felker, D., and Rose, A., (1981). Evaluating the effects of document design principles, Information design journal, 2(3/4), 236-43.

Rose, A. M. (1981) Problems in public documents, Information design journal, 2(3/4), 179-196.

Siegel, A. (1979). Fighting business gobbledygook… How to say it in plain English. Management Review, 68(11).

Shotter, J. (1993). Cultural Politics of Everyday Life. Toronto: University of Toronto Press.

Sless, D. (1979). Image design and modification: an experimental project in transforming. Information design journal, 1(2), 74-80.

Sless, D. (1981). Learning and Visual Communication. London: Croom Helm.

Sless, D. (1983). The Design and Use of Forms by Government. In: T. J. Smith, G. Osborne, and R. Penman (eds), Communication and Government. Canberra: Canberra CAE.

Sless, D. (1985a). Communication and the limits of knowledge, Prometheus, 3(1), 110-118.

Sless, D. (1985b). Form Evaluation: Some Simple Methods. Canberra: Information Co-ordination Branch, Department of Sport Recreation and Tourism.

Sless, D. (1985c). Informing information designers. icographic, (11) 6, 2-3.

Sless, D. (1986). In Search of Semiotics. London: Croom Helm.

Sless, D. (1988). Forms of Control, Australian Journal of Communication, 14, 57-69.

Sless, D. (1991). Communication and certainty. Australian Journal of Communication, 18(3), 19-31.

Sless, D. (1992a). What is information design? In: Penman, R., and Sless, D. (eds), Designing information for people (pp. 1-16). Canberra: Communication Research Press.

Sless, D. (1992b). Designing Documents that work. Xploration, III(1), 14-16.

Sless, D. (1994). International best practice and the citizen. Communication News, 7(2), 3-5.

Sless, D., and Wiseman, R. (1994). Writing about medicines for people: usability guidelines for consumer product information. Canberra: Department of Health and Human Services.

Stiff, P. (1993). Graphic design, MetaDesign, and information design. Information design journal, 7(1), 41-46.

Waller, R. (1984). Designing a Government form: a case study. Information design journal, 4, 36-57.

Westpac Banking Corporation (1987). ‘Legalese – No Breeze’. Changes, May 1987, p. 5.

Wright, P. (1975). Forms of complaint. New Behaviour, 2, 206-209.

Wright, P. (1979). The quality control of document design. Information design journal, 1, 33-42.

Wright, P. (1980). Strategy and tactics in the design of forms. Visible Language, 15, 151-193.

Wright, P. (1984) Informed design for forms. In: H. Zwaga., and R. Easterby (eds), Information Design (pp. 545-577). Chichester: John Wiley & Sons.

Wright, P., and Barnard, P. (1975). Just fill out this form – a review for designers. Applied Ergonomics, 6, 213-220.

Zwaga, H., and Easterby, R. (1984). Developing effective symbols for public information. In: H. Zwaga and R. Easterby (eds), Information Design (pp. 277-298). Chichester: John Wiley.