The Australian Library Journal

Information systems evaluation and the search for success: lessons for LIS research

Stuart Ferguson, Philip Hider and Tricia Kelly

Manuscript received April 2005. This is a refereed article.

Introduction

This paper draws on the growing body of literature in the information systems and technology field to suggest useful measures and methodologies which may have potential for greater use in the LIS sector. The review covers both 'soft', largely subjective, user-orientated approaches, such as measurement of user satisfaction, and 'hard', objective approaches, such as transaction log analysis. It notes that most evaluation methods are designed to measure system 'success' and discusses attempts to define success.

One hardly needs to labour the point in this journal that information systems evaluation is a critical activity for many library managers. With the development of the web (over ten years ago) and its organisational equivalent, the intranet, the library and information services (LIS) environment has changed dramatically. Library websites have become complex portals for information resources available to a library's community, both within the physical library itself and elsewhere in the virtual environment. Moreover, in the corporate sector especially, many libraries use their organisations' intranets for the delivery and marketing of library services, and even have a role to play in intranet development. Given the overwhelming scope of the electronic information environment for many end-users, there is an onus on LIS professionals to develop delivery systems that are easily navigated. It is also important for them to develop the means to evaluate their new services and delivery systems.

This paper attempts to provide an overview of information systems (IS) evaluation, not just within the LIS community, but also in the information and communication technologies (ICT) sector. One approach taken by our ICT colleagues to the complex issue of IS evaluation is to attempt to assess whether the desired outcomes of system development were achieved successfully. Since the 1970s, there has been considerable research into measuring 'success' and determining the success criteria of information systems. The considerable financial investment by organisations in information systems underlines the importance of success evaluation for ICT and LIS researchers and practitioners alike (Saarinen 1996: 103).

But what is 'success'? The Macquarie Dictionary (1996: 404) provides an objective and somewhat broad definition of success as 'the favourable or prosperous termination of attempts or endeavours'. Applied to LIS research, this can have a myriad of meanings and measures. Success is a subjective concept and its definition can vary depending on who is asked to define it. In the case of a search undertaken by an end-user in an information database, for instance:
Each of these stakeholders is defining success according to his/her view of the world, and each is correct. This makes the achievement of universal measures of success all the more challenging.

The many well-publicised information systems failures and the paradox of high investment and low productivity have brought the issues of critical success factors and success measurement to the fore (Ballantine et al. 1996: 5), particularly in the ICT research arena since the 1970s. In contrast, 'success research' has not extended far into the LIS field, perhaps because this topic was seen to be too focused on specific computer applications. As the ICT and LIS fields of research and practice converge, however, there is an opportunity for library researchers and practitioners to learn from previous and current ICT research and to apply it to the evaluation of LIS products, systems and services. We therefore propose to review systems evaluation as it has been practised in the LIS sector, focusing on the main methodologies employed, before examining the approach taken by our ICT colleagues.

System evaluation in the LIS literature

Evaluation of information systems has been a central theme of LIS literature for several decades, with both practitioners and researchers making important contributions. The extensive review by Harter and Hert (1997) of information retrieval (IR) system evaluation remains fairly representative of the methodologies employed by the LIS community, although contributions from practice, generally more user-centric, appear to have increased since then. The review dwells on the traditional 'black box' approach of IR research, characterised by the well-known Cranfield experiments, but highlights the ways in which the user has increasingly been brought into the reckoning.

Before outlining the principal methodologies, it is worth noting their range. Different techniques are used for the collection of different kinds of data and, given the wide range of data required to address the many evaluation criteria used, the range of methodologies is considerable. It is also worth noting that, over the years, distinctions between methodologies have become blurred. Quite apart from the combination of different methodologies in specific evaluations, some interesting hybrid methodologies have been devised, bridging the gaps between qualitative and quantitative analyses, naturalistic and laboratory settings, and ultimately the user and the system. Further, real-life 'simulations' of information seeking have been introduced into experimental methodologies based on traditional IR measures.

Evaluations in which the end-user articulates his/her experiences with a given system abound. Most commonly, these articulations are derived through questionnaire surveys or interviews. An alternative to these is the focus group, which can encourage more reflective and analytical evaluations, provided a group of the system's stakeholders can be readily identified and the 'group effect' is not thought likely to bias responses. Given that users' perceptions and attitudes are difficult to measure, their feedback might be better used heuristically, helping to identify particular problems and solutions. The most fruitful diagnostic method is perhaps protocol analysis, based on 'talk aloud' sessions which might involve actual information problems, although they more commonly involve simulated questions (see, for instance, Hersh, Pentecost and Hickam 1996).
Protocol analysis was already a well-established methodology in information retrieval, but through the recent adoption of usability testing techniques, interest in its use has increased markedly. The popularity of the related methodology, discourse analysis, based on mediated searching, has at the same time declined.

Another user-centric approach, which simulates real life, makes use of assessment forms based on set tasks. End-users are asked to perform tasks designed to represent typical activities for which the system was designed, and then evaluate the system's performance. Usability testing has spawned a series of assessment instruments for this purpose. Responses may be more detailed and systematic than those of a post-session survey of real-life users, but simulations can never perfectly represent what might actually happen. The technique is sometimes considered a starting point for evaluations that are followed up by the application of other methods.

The two main methodologies based on system-generated expressions of 'success' are usage statistics and transaction log analysis (TLA), sometimes accompanied by system performance statistics; in fact, usage statistics are a basic form of TLA. Although usage can be indicated by user responses, a more reliable method is to acquire the relevant data from the system's log. Unfortunately, particularly with client-server systems, the log rarely provides a detailed account of when, how and by whom the system was used. Moreover, it is not always the case that system usage 'reflects the degree of confidence the users have in the effectiveness of their information systems' (Thong and Yap 1996: 603). TLA has sometimes been criticised for a lack of depth (Peters 1993). Many OPAC studies have employed system loggers, addressing questions such as 'search failure' and the use of particular functionalities. Experimental research has also found TLA to be a helpful way of diagnosing interface issues and those of user-system interaction. As interaction between user and system becomes an increasingly central aspect of online information seeking, so 'online monitoring' becomes all the more promising as a methodology, particularly as a supplement to the rather asynchronous measures of traditional IR. As Borgman, Hirsh and Hiller observe, 'the most fundamental change in evaluation goals is the shift in emphasis from the output, or product of the search, to the emphasis on the process of the search itself' (1996: 571). Hider (2005) argues that it is the limitations of the loggers themselves, rather than the methodology, which are holding back TLA, and that more sophisticated analyses can be carried out given richer data from which to work.

The 'classic' form of information retrieval evaluation is, of course, the experiment using datasets made up of documents with predetermined relevance judgements, that is, relevance testing. The measures of recall and precision continue to underpin many evaluations of experimental systems, despite the fact that many of the assumptions on which they are based can no longer be claimed as reasonable for most real-life situations. Several serious issues are raised by Harter and Hert (1997). However, traditional IR has contributed greatly to the development of many of today's large-scale systems, such as the search engines on the web, and its vehicles, such as TREC, remain influential. Where more comprehensive evaluations are sought, it is considered better to use different methodologies in combination.
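As a point of reference, the two traditional IR measures just mentioned are easily computed once a set of relevance judgements is available. The following Python sketch is a minimal illustration (the document identifiers are invented for the example and are not drawn from any of the studies cited): it calculates recall and precision for a single query, given the set of documents retrieved and the set judged relevant in advance.

```python
def recall_precision(retrieved, relevant):
    """Compute recall and precision for one query.

    retrieved -- set of document ids returned by the system
    relevant  -- set of document ids judged relevant beforehand
    """
    hits = retrieved & relevant  # relevant documents actually retrieved
    recall = len(hits) / len(relevant) if relevant else 0.0
    precision = len(hits) / len(retrieved) if retrieved else 0.0
    return recall, precision


# Hypothetical example: 4 of the 10 retrieved documents are relevant,
# out of 8 documents judged relevant in the whole test collection.
retrieved = {f"doc{i}" for i in range(1, 11)}
relevant = {"doc2", "doc4", "doc7", "doc9", "doc12", "doc15", "doc18", "doc20"}
r, p = recall_precision(retrieved, relevant)
print(f"recall = {r:.2f}, precision = {p:.2f}")  # recall = 0.50, precision = 0.40
```

In Cranfield-style experiments and TREC-style evaluations, such figures would be averaged over many queries; the sketch simply makes the underlying arithmetic explicit.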
Some multi-faceted evaluations are reported in the literature (see, for instance, Hill et al. 2000, on the Alexandria Digital Library). 'Convergent methods' evaluation is convincingly advocated by Buttenfield (1999), who draws together ethnography, cognitive walk-throughs, video recordings, online surveys, focus groups and TLA. As the usability testing approach gains currency in LIS, so the synthesis of various techniques is likely to increase, given that usability is commonly defined by instruments with extensive sets of measures, demanding a sophisticated combination of methods. Synchronising the different recorders often requires a laboratory set-up, and librarians should seek to secure access to those established for usability research.

Calls for more sophisticated analyses have become fairly common. In his work on evaluating multimedia systems, Dunlop (2000) concludes that 'the challenge for interactive evaluation in IR is to connect the two types of evaluation: engine performance, and suitability for end-users'. Dunlop advocates building on 'classic' IR methodology by adding HCI techniques that emphasise process, such as task observation, protocol analysis and walkthroughs.

A brief survey[1] of evaluations reported in 2003 and 2004 indicated that the most common methodology employs assessment forms and prescribed tasks. Usage statistics come second, though most other methodologies are represented by at least one study. Most evaluations make use of only one method, but a minority apply several.

IS evaluation in the ICT literature

IS research refers to that subset of ICT research that focuses on the development, use and impact of ICTs in organisational environments (Myers and Avison 2002: 3). What we are discussing here, therefore, is largely applied research, which, as Frada Burstein puts it, targets 'a specific problem relating to the introduction or functioning of an information system' (2002: 149). Even in the area of theoretical IS research, it can be argued that systems development issues loom large, because theory generally leads to the development of a prototype, in order that the theory 'be tested in the real world to show its validity and to recognise its limitations' (Burstein 2002: 149).

Evaluation occurs twice in the 'traditional' structured systems analysis and design approach: first, in the feasibility phase, in which an attempt is made to establish likely impact and costs, and, second, in the form of a post-implementation evaluation, which is an attempt to measure what impact the system actually had (Smithson and Hirschheim 1998: 160, Serafeimidis 2002: 172-3). This approach focuses on issues such as whether the project was delivered on time, whether it was to budget and whether it met the specifications, and ignores other issues, such as what the stakeholders think of both the 'process' and the 'product' (Wateridge 1998: 59). Post-implementation evaluation is also criticised on the grounds that it is generally conducted by system developers as a quick 'pack up and get out' project closure activity (Smithson and Hirschheim 1998: 162). One of the most fundamental criticisms of post-implementation evaluation, which strikes at the whole idea of a systems life cycle, is that information systems are evolutionary.
While the kind of evaluation associated with structured systems analysis and design may be appropriate in cases in which systems are developed with set budgets and time scales, and in which projects, therefore, reach some form of completion, it is increasingly recognised that technologies such as the web and intranets constitute evolutionary systems, for which traditional, measure-orientated evaluation techniques are unsuitable (Patel 2002: 257).

A Price Waterhouse survey of 1994 established that cost containment was one of the top three concerns of British IS managers (Smithson and Hirschheim 1998: 160). During the feasibility phase of systems development, therefore, cost-benefit analyses are common (Smithson and Hirschheim 1998: 162), as are other related methods. Broader studies, based on theories of information economics, examine the value of IS development to organisations in terms of factors such as strategic impact, productivity impact and consumer value, while another set of studies, based on behavioural science, addresses the relationship between IS development and individual satisfaction and decision-making behaviour, and the consequent impact on the organisation (Serafeimidis 2002: 176-178).

A more 'humanistic' school of thought argues that technical/organisational perspectives need to be supplemented by what Garrity and Sanders call a 'socio-technical' approach (1998: 7). Work, they suggest, is accomplished in a 'social collaborative way' and researchers, therefore, do well to take into account people's social, cognitive and affective needs (1998: 6-7). If 'socio-technical' factors are ignored, system failures may result, they argue, even in cases in which there are clear technical or organisational benefits, such as an improved system interface (1998: 7-10). There is, therefore, a range of success factors for any information system. Wateridge (1998: 61) identifies six of them.
Some of these, such as 'meets budget' and 'meets timescale', are easily measured, while others, such as 'achieves purpose', 'meets user requirements' and 'happy users', are not so easily assessed. In their influential research, DeLone and McLean (1992) proposed a set of six interrelated success factors: system quality; information quality; use; user satisfaction; individual impact; and organisational impact. More than a decade later, they revisited their model in order to propose an update that incorporated changes in the IS arena (see Figure 1).

[Figure 1: DeLone and McLean's updated IS Success Model]
Within the model outlined in Figure 1 above, the six variables are individually important components of success but, in the measurement of the overall success of the information system, they are interrelated. 'System quality' measures the desired characteristics of the system or service (including usability, availability, reliability, adaptability and response time), 'information quality' focuses on content, while 'service quality' refers to the overall support delivered by the service provider (DeLone and McLean 2003: 24-25). Of these components, user satisfaction has emerged strongly in IS research as the key measure of success for information systems and services.

User satisfaction as a measure of success

'User satisfaction' - defined by Ives, Olson and Baroudi (1983: 785) as 'the extent to which users believe the information system available to them meets their information requirements' - is a complex variable. It is subjective and can vary according to external influences that have nothing to do with the information system per se, such as whether the person likes his/her job or is having financial or personal problems. Bailey and Pearson take a similar view, believing that 'satisfaction in a given situation is the sum of one's feelings or attitudes toward a variety of factors affecting that situation' (1983: 530). As Bruce puts it, 'satisfaction with information seeking is a state of mind which represents the composite of a user's material and emotional responses to the information-seeking context' (1998: 541). This state of mind can be influenced by a number of factors. In a literature-based analysis of the variables affecting ICT end-user satisfaction, Mahmood et al. (2000: 753) divide these factors into three major categories: perceived benefits and convenience; user background and involvement; and organisational attitude and support. Moreover, even these can be affected by other factors, such as limited time or user attitude, which can vary from day to day, moment to moment.

The concept of satisfaction is indeed multi-faceted, but many researchers consider user satisfaction to be an appropriate indicator and valid measure of information system success (DeLone and McLean 1992, Gatian 1994, Jiang et al. 2001, Guimaraes, Yoon and Clevenson 1996, Lin and Shao 2000, Gelderman 1998, Mahmood et al. 2000). DeLone and McLean (1992: 69) outline why they believe user satisfaction has been the most widely used single measure.
Galletta and Lederer (1989: 421) outline why measurement of user satisfaction is important. For the researcher, the ability to investigate generalisable relationships between user satisfaction and other variables, such as training, may provide a better understanding of the IS environment. From a practitioner's perspective, user satisfaction can be harnessed as a feedback mechanism to highlight user perceptions of strengths and weaknesses. The strengths can be used for recognition and reinforcement of the system or service, while the weaknesses signal areas for improvement.

A variety of methods and techniques have been designed to increase the quality of construct measurement in IS research. Klenke (1992) provides an excellent review and critique of user satisfaction and user involvement instruments. The development of user satisfaction measurement instruments began with a thirty-nine-item measure originally developed by Bailey and Pearson (1983), followed up by Ives, Olson and Baroudi (1983) and Baroudi and Orlikowski (1988), and further developed by Doll and Torkzadeh (1988, 1991) and Doll, Xia and Torkzadeh (1994). The instrument designed by Doll and Torkzadeh (1988) was intended to measure the satisfaction of users who interact directly with a specific application. Although many of the instruments developed to measure information systems success were developed in the context of PC-based applications (Aladwani and Palvia 2002: 467), two relatively recent studies have adopted the 'end-user computing satisfaction' (EUCS) instrument for web-based services, with positive results: see Davis and Hantula's study (2001) of internet-based learning and Herring's exploratory study (2001) of attitudes toward use of the web, both employing the Doll and Torkzadeh EUCS instrument. Doll and Torkzadeh used the term 'end-user computing satisfaction' to distinguish their instrument from previous user satisfaction instruments: 'End-user computing satisfaction is conceptualised as the affective attitude towards a specific computer application by someone who interacts with the application directly' (Doll and Torkzadeh 1988: 261). 'End-user computing' is defined as the process by which users develop applications whereby they have access to computers, data and support systems and the capability to control their own applications and computing needs directly (Igbaria and Nachman 1990). This is a valid description of the way LIS systems and services are currently offered within the online environment.

LIS research and success evaluation

Can the success measures developed through IS research be utilised by LIS researchers? User satisfaction studies have been used extensively by libraries, but they tend to be general surveys that ask whether library clients are 'happy' with services and systems. Although DeLone and McLean continue to focus their IS Success Model on systems per se, the model could be applied in LIS research to the evaluation of both systems (websites, OPACs, databases and so forth) and services (such as virtual reference and document delivery). Previous success studies within the IS field, encompassing criteria such as system usage, system quality, perceived usefulness and user satisfaction - to name but a few - provide a substantial base on which LIS researchers and practitioners could build.
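By way of illustration only, the Python sketch below shows how responses to a short Likert-scale satisfaction questionnaire might be aggregated into per-dimension and overall scores, the kind of calculation that underlies instruments such as EUCS. The items and dimension labels are hypothetical simplifications invented for the example; they are not the validated Doll and Torkzadeh instrument, which would need to be applied in its published form.

```python
from statistics import mean

# Hypothetical, simplified satisfaction items grouped into dimensions loosely
# echoing those used in EUCS-style instruments. Illustrative only; not the
# published instrument.
DIMENSIONS = {
    "content":     ["The system provides the information I need",
                    "The information content meets my needs"],
    "ease_of_use": ["The system is easy to use",
                    "The system is user friendly"],
    "timeliness":  ["I get the information I need in time"],
}

def score_responses(responses):
    """Aggregate 1-5 Likert responses (item -> score) into dimension means
    and an overall mean."""
    dimension_scores = {
        name: mean(responses[item] for item in items)
        for name, items in DIMENSIONS.items()
    }
    overall = mean(dimension_scores.values())
    return dimension_scores, overall

# One respondent's (invented) answers on a 1-5 scale.
answers = {
    "The system provides the information I need": 4,
    "The information content meets my needs": 3,
    "The system is easy to use": 5,
    "The system is user friendly": 4,
    "I get the information I need in time": 2,
}
per_dimension, overall = score_responses(answers)
print(per_dimension, round(overall, 2))
```

In practice an evaluation would average such scores across all respondents and, ideally, report reliability statistics for each dimension rather than a single figure for one user.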
In an exploration of some of the new service paradigms that are emerging as libraries harness enabling technologies, Moyo (2004) states: 'Initially technology tools were being applied to the same fundamental library service paradigms to make the work more efficient, but now library work itself is beginning to change, with technological innovation leading to design of new services for users'. Where such services are linked intrinsically to technology, LIS researchers have the opportunity to explore the application of success research.

Care needs to be taken, however. Success measurement in IS research takes into account a wide range of variables, making identification of success factors problematic. There is, for example, some methodological disagreement over what constitutes a 'dependent' variable, which is part of IS success (information quality, for instance, would fall into this category), as distinct from an 'independent' variable, which can actually be one of the causes of success. Independent variables, such as training, relationship with 'EDP staff', user involvement and top management support - all interesting factors - can be mistaken for dependent variables (Garrity and Sanders 1998: 4, DeLone and McLean 2003: 17). Downing (1999: 203-204) also makes the point that user satisfaction measures are intrusive, from the users' perspective, and cumbersome, and may be difficult to justify to employers. It is worth adding, too, that 'user satisfaction' instruments have been applied within the LIS environment, but have been criticised for a lack of theoretical underpinning, questionable reliability and terminological ambiguity - and have even been misapplied on occasion (Thong and Yap 1996). Ryker, Nath and Henson (1997) have shown how the context of a user's expectations strongly influences his/her satisfaction judgements. Furthermore, a study by Halcoussis et al. (2002) indicates that levels of 'satisfaction' may be chiefly determined by perceived search 'success', rather than by system design in general - which is of particular concern given Hildreth's contention (2001) that perceived performance bears little relationship to actual performance. It is not altogether clear whether these instruments indicate 'success' any more accurately than does system usage.

As suggested earlier, however, the combination of different methodologies is sometimes used to obtain more comprehensive evaluations, and there may be a case for using one evaluation technique to check the validity of another. For example, by using one of the variables of success - user satisfaction - to measure attitude, and combining this with TLA to measure behaviour, an overall measure of success of a web-based library service is feasible (Larner 2003). Indeed, it can be argued that the combination of the two approaches provides a balance between attitude (satisfaction) and behaviour (usage) that gives system developers a more robust means of measuring success than reliance on one or the other (a simple illustration of such a combination is sketched below). Of course, this might not always be practicable. Many LIS professionals do not have access to sufficient resources, or time, to engage in large-scale evaluations. Finally, as we noted earlier, success is very subjective and varies, depending on the perspective of the individual player. Perhaps the most important lesson LIS researchers can learn from IS success research is that success or failure is rarely as clear-cut as black or white.
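The following Python fragment is a minimal sketch of such a combined approach, and not a reconstruction of Larner's method: it pairs a behavioural indicator drawn from a simple transaction log (the proportion of searches returning zero hits) with an attitudinal indicator (the mean score from a satisfaction survey). The log format, field names and figures are assumptions made purely for illustration.

```python
import csv
from io import StringIO
from statistics import mean

# Hypothetical transaction log: one row per search, with the number of hits.
# (The misspelled query is deliberate, to illustrate a failed search.)
LOG = """timestamp,session,query,hits
2005-03-01T09:12,s1,digital libraries,14
2005-03-01T09:15,s1,infomation litercy,0
2005-03-01T10:02,s2,opac usability,3
2005-03-01T10:05,s2,eucs instrument,0
2005-03-01T11:30,s3,transaction log analysis,7
"""

def zero_hit_rate(log_text):
    """Behavioural measure: share of searches that returned no results."""
    rows = list(csv.DictReader(StringIO(log_text)))
    return sum(1 for r in rows if int(r["hits"]) == 0) / len(rows)

def satisfaction_score(ratings):
    """Attitudinal measure: mean of 1-5 survey ratings."""
    return mean(ratings)

behaviour = zero_hit_rate(LOG)                     # 0.40 -> 40% of searches failed
attitude = satisfaction_score([4, 3, 5, 4, 2])     # invented survey ratings
print(f"zero-hit rate: {behaviour:.0%}, mean satisfaction: {attitude:.1f}/5")
```

Reporting the two indicators side by side, rather than collapsing them into a single number, preserves the distinction between attitude and behaviour discussed above.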
However, by utilising the existing, validated success measurement tools developed by IS researchers, defining success within the LIS environment may be made a more effective, efficient and perhaps more accurate process.

Endnote
References

Aladwani, A M and Palvia, P C (2002) 'Developing and validating an instrument for measuring user-perceived web quality' Information and Management 36(6): 467-476.
Bailey, J E and Pearson, S W (1983) 'Development of a tool for measuring and analyzing computer user satisfaction' Management Science 29(5): 530-545.
Ballantine, J, Bonner, M, Levy, M and Martin, A (1996) 'The 3-D model of information systems success: the search for the dependent variable continues' Information Resources Management Journal 9(4): 5-14.
Baroudi, J J and Orlikowski, W J (1988) 'A short-form measure of user information satisfaction: a psychometric evaluation and notes on use' Journal of Management Information Systems 4(4): 44-59.
Borgman, C L, Hirsh, S G and Hiller, J (1996) 'Rethinking online monitoring methods for information retrieval systems: from search product to search process' Journal of the American Society for Information Science 47(7): 568-583.
Bruce, H (1998) 'User satisfaction with information seeking on the internet' Journal of the American Society for Information Science 49(6): 541-556.
Burstein, F (2002) 'System development in information system research' in Research Methods for Students, Academics and Professionals: Information Management and Systems 2nd ed, K Williamson (ed), Centre for Information Studies, Charles Sturt University, Wagga Wagga, NSW, pp 147-58.
Buttenfield, B (1999) 'Usability evaluation for digital libraries' Science and Technology Libraries 17(3/4): 39-59.
Davis, E S and Hantula, D A (2001) 'The effects of download delay on performance and end-user satisfaction in an Internet tutorial' Computers in Human Behavior 17(3): 249-268.
DeLone, W H and McLean, E R (1992) 'Information systems success: the quest for the dependent variable' Information Systems Research 3(1): 60-95.
DeLone, W H and McLean, E R (2003) 'The DeLone and McLean model of information systems success: a ten-year update' Journal of Management Information Systems 19(4): 9-30.
Doll, W J and Torkzadeh, G (1988) 'The measurement of end-user computing satisfaction' MIS Quarterly 12(2): 259-274.
Doll, W J and Torkzadeh, G (1991) 'The measurement of end-user computing satisfaction: theoretical and methodological issues' MIS Quarterly 15(1): 5-10.
Doll, W J, Xia, W and Torkzadeh, G (1994) 'A confirmatory factor analysis of the end-user computing satisfaction instrument' MIS Quarterly 18(4): 453-461.
Downing, C E (1999) 'System usage behavior as a proxy for user satisfaction: an empirical investigation' Information and Management 35: 203-216.
Dunlop, M (2000) 'Reflections on Mira: interactive evaluation in information retrieval' Journal of the American Society for Information Science 51(14): 1269-1274.
Galletta, D F and Lederer, A (1989) 'Some cautions on the measurement of user information satisfaction' Decision Sciences 7(3): 419-438.
Garrity, E J and Sanders, G L (1998) 'Introduction to information systems success measurement' in Information Systems Success Measurement, E J Garrity and G L Sanders (eds), Idea Group, Hershey, PA.
Gatian, A W (1994) 'Is user satisfaction a valid measure of system effectiveness?' Information and Management 26(3): 119-131.
Gelderman, M (1998) 'The relation between user satisfaction, usage of information systems and performance' Information and Management 34(1): 11-18.
Guimaraes, T, Yoon, Y and Clevenson, A (1996) 'Factors important to expert systems success' Information and Management 30(3): 119-130.
Halcoussis, D, Halverson, A L, Lowenberg, A D and Lowenberg, S (2002) 'An empirical analysis of Web catalog user experiences' Information Technology and Libraries 21(4): 148-157.
Harter, S P and Hert, C A (1997) 'Evaluation of information retrieval systems: approaches, issues, and methods' Annual Review of Information Science and Technology 32: 3-94.
Herring, S D (2001) 'Using the World Wide Web for research: are faculty satisfied?' The Journal of Academic Librarianship 27(3): 213-219.
Hersh, W R, Pentecost, J and Hickam, D H (1996) 'A task-oriented approach to information retrieval evaluation' Journal of the American Society for Information Science 47(1): 50-56.
Hider, P (2005) 'A new generation of transaction logging systems: a new era of transaction log analysis?' Information Online: 12th Exhibition and Conference: Proceedings, 1-3 February, http://conferences.alia.org.au/online2005/papers/c7.pdf (viewed 17 February 2005).
Hildreth, C R (2001) 'Accounting for users' inflated assessments of on-line catalogue search performance and usefulness: an experimental study' Information Research 6(2), http://informationr.net/6-2/paper101.html (viewed 17 December 2004).
Hill, L L, Carver, L, Larsgaard, M, Dolin, R, Smith, T R, Frew, J and Rae, M (2000) 'Alexandria Digital Library: user evaluation studies and system design' Journal of the American Society for Information Science 51(3): 246-259.
Igbaria, M and Nachman, S A (1990) 'Correlates of user satisfaction with end-user computing: an exploratory study' Information and Management 19(2): 73-82.
Ives, B, Olson, M H and Baroudi, J J (1983) 'The measurement of user information satisfaction' Communications of the ACM 26(10): 785-793.
Jiang, J J, Klein, G, Roan, J and Lin, J T M (2001) 'IS service performance: self-perceptions and user perceptions' Information and Management 38(8): 499-506.
Klenke, K (1992) 'Construct measurement in management information systems: a review and critique of user satisfaction and user involvement instruments' INFOR 30(4): 325-348.
Larner, P C (2003) 'Evaluating Intranets: User Satisfaction as a Measure of Success' M App Sci thesis, Charles Sturt University, Wagga Wagga, NSW.
Lin, W T and Shao, B B M (2000) 'The relationship between user participation and system success: a simultaneous contingency approach' Information and Management 37(6): 283-295.
Macquarie Dictionary (1996) Macquarie Library Pty Ltd, Macquarie University, NSW.
Mahmood, M A, Burn, J M, Gemoets, L A and Jacquez, C (2000) 'Variables affecting information technology end-user satisfaction: a meta-analysis of the empirical literature' International Journal of Human-Computer Studies 52(4): 751-771.
Moyo, L M (2004) 'Electronic libraries and the emergence of new service paradigms' The Electronic Library 22(3): 220-230.
Myers, M D and Avison, D E (2002) 'An introduction to qualitative research in information systems' in Qualitative Research in Information Systems: A Reader, M D Myers and D E Avison (eds), Sage Publications, London, pp 3-12.
Patel, N V (2002) 'Evaluating evolutionary information systems' in Information Systems Evaluation Management, W Van Grembergen (ed), IRM Press, Hershey, PA.
Peters, T A (1993) 'The history and development of transaction log analysis' Library Hi Tech 11(2): 41-66.
Ryker, R E, Nath, R and Henson, J (1997) 'Determinants of computer user expectations and their relationships with user satisfaction: an empirical study' Information Processing and Management 33(4): 529-537.
Saarinen, T (1996) 'An expanded instrument for evaluating information system success' Information and Management 31(2): 103-118.
Serafeimidis, V (2002) 'A review of research issues in evaluation of information systems' in Information Systems Evaluation Management, W Van Grembergen (ed), IRM Press, Hershey, PA, pp 167-194.
Smithson, S and Hirschheim, R (1998) 'Analysing information systems evaluation: another look at an old problem' European Journal of Information Systems 7(3): 158-174.
Thong, J Y L and Yap, C (1996) 'Information systems effectiveness: a user satisfaction approach' Information Processing and Management 32(5): 601-610.
Wateridge, J (1998) 'How can IS/IT projects be measured for success?' International Journal of Project Management 16(1): 59-63.

Biographical information

Stuart Ferguson is a senior lecturer in Information Management and Librarianship, and has professional experience from Australia, South Africa and Scotland. He has publications in information management, information ethics, knowledge management and literary theory. Other interests include information politics and the knowledge society. He has a PhD in Marxist aesthetics.

Philip Hider joined the School of Information Studies at Charles Sturt University after extensive experience as a librarian and a lecturer in the United Kingdom and Singapore. He has recently completed his PhD thesis in information retrieval at City University, London. He is a member of the Australian Committee on Cataloguing.

Tricia Kelly is knowledge services manager for CSIRO Sustainable Ecosystems. Her research interests include the theoretical and practical aspects of the relationship between library and information management professionals and knowledge management, and the continual and rapid changes in LIM professionals' roles, which are often driven by an emphasis on knowledge management.