Tuesday, 8 September 2020

Professor Mike Cooley, author of "Architect or Bee?" and academic genius, dies

Professor Mike Cooley has just died. His unique work on socially useful production, the Lucas Plan, the conversion of military production to societally needed manufacturing, and much else has been inspirational for a whole generation of alternative technologists. We have lost a true giant.

When the workers nearly took control: five lessons from the Lucas Plan | P2P Foundation (14 May 2018)
https://blog.p2pfoundation.net/when-the-workers-nearly-took-control-five-lessons-from-the-lucas-plan/2018/05/14
"Lesson 1: Find common ground. A first condition for this group of fairly conventional, mainly middle-aged, male trade unionists to create what became a beacon of an alternative economics was building the organisation that eventually provided the means by which many individual intelligences became what Eurig Scandrett refers to as 'collective'."

Technology Networks for Socially Useful Production, Adrian Smith
http://sro.sussex.ac.uk/id/eprint/53574/2/

Mike Cooley (born 1934) was an Irish-born engineer, writer and former trade union leader who is best known for his work on the social effects of technology, "Socially Useful Production" and "Human Centred Systems". He was involved in workplace activism at the British company Lucas Aerospace in the late 1970s.[1] In 1981, he was a recipient of the Right Livelihood Award for "designing and promoting the theory and practice of human-centred, socially useful production."[2] Cooley was born in Tuam, Ireland, attended the Christian Brothers School and was a classmate of the playwright Tom Murphy and the trade unionist Mick Brennan. He was an apprentice at the Tuam Sugar Factory and later studied engineering in Germany, Switzerland and England,[3] gaining a PhD in computer-aided design.
Cooley held several leadership positions in the field of computer-aided design (CAD) and was an advisor on numerous public and private sector projects. He was the founding president of the International Research Institute in Human Centred Systems (IRIHCS)[4] and of the international journal AI & Society, and a founding director of the Greater London Enterprise Board. He published over 100 scientific papers and fifteen books, and was a guest lecturer at universities in Europe, Australia, the US and Japan.[5] His book "Architect or Bee?" has been translated into six languages.

Work life

The Lucas Plan

In the late 1970s, Mike Cooley was a designer at Lucas Aerospace and chaired the local branch of the Technical, Administrative and Supervisory Section (TASS) trade union. He was one of the militant activists behind the Lucas Plan,[6][7] a radical strategy to avoid workforce layoffs by converting production at Lucas from armaments to civilian products.[8] The plan's aim was to replace weapons manufacture with the development of socially useful goods, such as solar heating equipment, artificial kidneys, and systems for intermodal transportation.[1] The goal was not simply to retain jobs, but to design the work so that the workers would be motivated by the social value of their activities.
As Cooley put it, "the workers are the experts".[9] The proposals of the alternative plan were not accepted by Lucas management and Cooley was dismissed in 1981,[10] allegedly for spending excessive time on union business[11] and "concerns of society as a whole".[12] Following his sacking by Lucas he was appointed Technology Director of the GLC and later founded[13] the Greater London Enterprise Board (GLEB).[10]

Greater London Enterprise Board (GLEB): 1982

Ken Livingstone and Mike Cooley[13] founded the Greater London Enterprise Board (GLEB) in 1982, an industrial development and job creation agency set up by the GLC to create employment by investing in the industrial regeneration of London, with funds provided by the council, its workers' pension fund and the financial markets. During the first two years of the enterprise board's existence the Greater London Council provided a total annual budget of around £30 million, made up of some £20 million in section 137 funds and £10 million in section 3 mortgage loan facilities. Frank Dobson said in Hansard in 1985, when GLEB was under threat of closure, "The Government are not worried because the GLEB has been a failure; they are worried because it has been a success".[14] GLEB became independent in 1986 when the GLC was abolished; it changed its name to Greater London Enterprise (GLE) and funded its activities from its own income.[15]

AI & Society (Founding Chairman): 1987

Mike Cooley was the founding chairman of AI & Society, an international forum for socially responsible technology founded in 1987 that focuses on "societal issues" (Springer, 2018).

Publications

Architect or Bee?: 1980

In 1980, Cooley published Architect or Bee?, a critique of the automation and computerisation of engineering work. The book alludes to Karl Marx's comparison of the worst of architects with the best of bees: the architect raises the structure in imagination before erecting it in reality.[16] According to Orlando Hill, "Mike Cooley's Architect or Bee?
put the case that a new organisation of technology could provide social good rather than profit".[17] He goes on to say: "Cooley argues that if we are going to move from merrily producing commodities to producing goods that people need and want, we must change our attitude towards technology. The technology used today evolved from the concept of the division of labour. In a capitalist system in which the maximization of profit is the sole objective and people are regarded as units of labour-power, the division of labour and fragmentation of skills is absolutely rational and scientific. However, the consequence is the deskilling of workers and alienation from reality. A division between theory and practice is created with a bias towards theoretical knowledge. The skill and practical knowledge of the worker is despised."[17]

Cooley's work on human-centred systems and socially useful production was compiled and first published by Shirley Cooley, Mike's wife, in 1980 (Hand & Brain publications); the second edition was published in the US in 1982 by South End Press with an introduction by MIT Professor David Noble, and was followed by a new edition published by Hogarth Press in 1987 with an introduction by Anthony Barnett. The current edition was published by Spokesman Books in 2016 and has an introduction by Frances O'Grady, the General Secretary of the TUC.[18] The book has been translated into over 20 languages,[3] including Finnish, Irish and Chinese.
In Architect or Bee?, Cooley coined the term "human-centred systems" in the context of the transition in his profession from traditional drafting at a drawing board to computer-aided design.[19] Human-centred systems,[20] as used in economics, computing and design, aim to preserve or enhance human skills, in both manual and office work, in environments in which technology tends to undermine the skills that people use in their work.[21][22]

Delinquent Genius: The Strange Affair of Man and His Technology: 1992 (published 2018)

Mike Cooley's book Delinquent Genius: The Strange Affair of Man and His Technology (written 1992; published 2018) explores the relationship between mankind and the development of technology.[23] The book analyses the social impact of technology[24] and the dangers of accepting the "one best" scientific idea of progress.[25] According to Adrian Smith, Professor of Technology & Society at the University of Sussex, Cooley looks at "vantage points for realising neglected human purposes – such as creative work and environmental sustainability – through technology." Smith said its chapters "look upon a period of intense restructuring in the industrial manufacturing landscape, whose effects are still felt today".[26]

The Search for Alternatives | Liberating Human Imagination | A Mike Cooley Reader: 1972 to 2007 (published 2020)

The Search for Alternatives | Liberating Human Imagination | A Mike Cooley Reader, by Mike Cooley, ISBN 978 085124 8851, with a foreword by John Palmer and an introduction by Karamjit S. Gill, was published by Spokesman Books in January 2020. The book is a collection of previously published essays which charts the development of Cooley's work from 1972 to 2007. It complements his previously published work by showing the breadth of his essays and theories.
"we have become far too smart scientifically to survive much longer without wisdom" – Mike Cooley, The Myth of the Moral Neutrality of Technology

List of books

• Cooley, Mike (1982). Architect or Bee? The Human/Technology Relationship. Boston: South End Press. ISBN 978-0-89608-131-4.
• Cooley, Mike (1988). Produkte für das Leben statt Waffen für den Tod [Products for Life Instead of Weapons for Death]. Germany: Rowohlt Verlag. ISBN 9783499148309.
• Cooley, Mike (2016). Architect or Bee? The Human Price of Technology. UK: Spokesman Books. ISBN 978-0-85124-8493.
• Cooley, Mike (2018). Delinquent Genius: The Strange Affair of Man and His Technology. UK: Spokesman Books. ISBN 978-085124-878-3.
• Cooley, Mike (2020). The Search for Alternatives, Liberating Human Imagination: A Mike Cooley Reader. UK: Spokesman Books. ISBN 978-085124-8851.

Film, radio and television

In 1983 Mike Cooley appeared in "Farewell to Work?", produced for Channel Four by Udi Eichler of Brook Productions. On-screen participants included André Gorz, Patrick Minford, Claus Offe and Mike Cooley, and the discussion was chaired by Robert Hutchison. According to the film, technology would "virtually eliminate the manual working class by the end of the century" and displace jobs permanently. Gorz proposes working towards a future in which free time is sustained by a guaranteed minimum income, production is confined to essential goods, and people pursue satisfying and autonomous activities.[27] Cooley also features prominently in German filmmaker Harun Farocki's film Wie Man Sieht (As You See, 1983), which examines the emergence of computerisation and its effects on military and managerial uses of innovative technology.[28] His work was the subject of the 1988 TV documentary "Look, No Hands!", made for the Equinox Channel Four documentary series.
The film was directed by Christopher Rawlence and produced by Debra Hauer.[29] It was shown as season 1988, episode 12, on 9 October 1988,[30] and was also produced as a VHS video. In 1997, Cooley appeared in "My Education" by John Quinn, an RTÉ radio series[31] and a book published by Town House.[32] The book is a set of interviews with educationalists discussing their own education and features Mike Cooley,[33] Noam Chomsky, Seamus Heaney and Charles Handy, among others.[34][35] Cooley and Quinn also collaborated on "Education for the 1990s": Three Lectures Given at a Symposium in Radio Telefís Éireann, October 1989 (RTÉ 1989).[36] Cooley appeared in the 2003 Alan Gilsenan documentary "Sing on Forever" about the Irish playwright Tom Murphy, recalling his friendship with Murphy in Tuam.[37]

Awards

Mike Cooley was awarded the Right Livelihood Award in 1981 for "designing and promoting the theory and practice of human-centred, socially useful production".[38] In his acceptance speech, Cooley said, "Science and technology is not given. It was made by people like us. If it's not doing for us what we want, we have a right and a responsibility to change it."[39]

The Mike Cooley Archive

The Waterford Institute of Technology Luke Wadding Library acquired Mike Cooley's archive by donation from the Cooley family.[40] The archive includes over 1,400 items, including photographs, correspondence, journals, books, drawings, videos, cassette tapes and slides.[3][41] A large part of the archive relates to the Lucas Plan.

References

1. "Eco-pioneers in the 1970s: how aerospace workers tried to save their jobs – and the planet". The Guardian. https://www.theguardian.com/film/2018/oct/14/lucas-aerospace-1970s-plan-documentary-eco-pioneers
2. Smith, Adrian; Fressoli, Mariano; Abrol, Dinesh; Arond, Elisa; Ely, Adrian (25 August 2016). Grassroots Innovation Movements. ISBN 9781317451198.
3. Stapleton, Larry; O'Neill, Brenda; Cronin, Kieran; Kendrick, Matthew (2019). "Announcing the Professor Cooley archive at Waterford Institute of Technology, Ireland: A celebration of the legacy of Mike Cooley". AI & Society. 34 (2): 377–379. doi:10.1007/s00146-019-00878-y.
4. Schmid, Felix; Evans, Stephen; Ainger, Andrew W. S.; Grieve, Robert J. (6 December 2012). Computer Integrated Production Systems and Organizations. ISBN 9783642578953.
5. DBLP Computer Science Bibliography. https://dblp.org/pers/hd/c/Cooley:Mike
6. Wainwright, Hilary. The Lucas Plan. Schocken Books (1981). ISBN 978-0-8052-8098-2.
7. "1976: The fight for useful work at Lucas Aerospace". libcom.org. 13 September 2006. Retrieved 16 September 2013.
8. "The Plan: when engineers proposed socially useful goods over weapons". https://www.techworld.com/tech-innovation/plan-when-engineers-proposed-socially-useful-goods-over-weapons-3685280/
9. McDonnell, John; Wainwright, Hilary. "The new economics of Labour". 25 February 2018. https://www.opendemocracy.net/uk/hilary-wainwright/new-economics-of-labour
10. Smith, Adrian (2014). "Socially Useful Production" (PDF). STEPS Working Papers. 58. Brighton: STEPS Centre: 17. Retrieved 15 October 2016.
11. "New Scientist". Reed Business Information. 21 September 1978.
12. "The Right Livelihood Award website". Archived from the original on 2 October 2013. Retrieved 16 October 2013.
13. Guardian Staff (6 April 2000). "The good old days". The Guardian.
14. "Greater London Enterprise Board" (Hansard, 26 July 1985).
15. "Outside bodies – Greater London Enterprise". February 2019.
16. cf. Karl Marx, Capital, Volume I.
17. "Architect or Bee? The Human Price of Technology".
18. "Architect or Bee? The human price of technology". http://www.spokesmanbooks.com/Spokesman/PDF/131OGrady.pdf
19. Cooley, Mike. Architect or Bee? South End Press, 1982.
20. Cooley, Mike (1989). "Human-centred Systems". Designing Human-centred Technology. The Springer Series on Artificial Intelligence and Society. pp. 133–143. doi:10.1007/978-1-4471-1717-9_10. ISBN 978-3-540-19567-2.
21. Braverman, Harry. Labor and Monopoly Capital: The Degradation of Work in the Twentieth Century, with a foreword by John Bellamy Foster. Monthly Review Press, 1998.
22. Kraft, Philip (1977). Programmers and Managers: The Routinization of Computer Programming in the United States.
23. "Comments on Delinquent Genius by Mike Cooley". http://greenleftblog.blogspot.com/2019/02/comments-on-delinquent-genius-by-mike.html
24. Gill, Karamjit S. (2019). "Delinquent Genius: The strange affair of man and his technology". AI & Society. 34 (2): 387–389. doi:10.1007/s00146-018-0875-z.
25. Elliott, Dave. "Technology and the future: automation". https://newrenewextra.blogspot.com/2018/12/technology-and-future-automation.html
26. Smith, Adrian. "Answers on a postcard: how would you do technology differently?". https://steps-centre.org/blog/answers-on-a-postcard-how-would-you-do-technology-differently/
27. "Farewell to Work? (1983)".
28. "Harun Farocki: As You See".
29. "Look, No Hands! (1988)".
30. "Equinox: Look, No Hands! | TVmaze".
31. https://www.youtube.com/watch?v=IJ_qIhrWw3s
32. ISBN 9781860590726.
33. "Education matters, but not all learning takes place in school".
34. Quinn, John (October 1997). My Education. ISBN 9781860590726.
35. "Working on his retirement".
36. "John Quinn".
37. "A restless imagination, dogged by depression".
38. "Mike Cooley".
39. "Acceptance speech – Mike Cooley".
40. "Incredible engineering collection donated to Waterford IT".
41. "International experts on artificial intelligence to talk at Prof Mike Cooley Collection announcement". Waterford Institute of Technology.
External links

From judgment to calculation
Mike Cooley
Received: 15 December 2006 / Accepted: 19 February 2007. © Springer-Verlag London Limited 2007. AI & Society, DOI 10.1007/s00146-007-0106-5. (M. Cooley, Technology Innovation Associates, 99 Sussex Place, Slough, UK; e-mail: m.cooley@btconnect.com)

Abstract: We only regard a system or a process as being "scientific" if it displays the three predominant characteristics of the natural sciences: predictability, repeatability and quantifiability. This by definition precludes intuition, subjective judgement, tacit knowledge, heuristics, dreams and so on: in other words, those attributes which are peculiarly human. Furthermore, this is resulting in a shift from judgment to calculation, giving rise, in some cases, to an abject dependency on the machine and an inability to disagree with the outcome or even question it. To tolerate such a situation could be seen as an abdication of professional responsibility. In complex technological and scientific environments, it is sometimes said that those who make best use of computers already know what the answer is (in ball-park terms) before the calculation.

Keywords: Judgment to calculation; Human-centred systems; Symbiosis; Tacit knowledge; Lushai Hills Effect; Phylum; Rule-following

IT systems frequently come between the professional and the primary task as the real world of touch, shape, size, form (and smell) is replaced by an image on a screen or a stream of data or calculation outputs. This can lead to high levels of abstraction where the ability to judge is diminished. I have described elsewhere the case of a designer using an advanced CAD system who input the decimal point one place to the right and downloaded the resultant output to the production department on a computer-to-computer basis (Cooley 1991). The seriousness of this error was further exacerbated when the designer, shown the resulting component which had been produced, did not even recognise that its dimensions were ten times too large.

Scientific knowledge and mathematical analysis enter into engineering in an indispensable way and their role will continue. However, engineering contains elements of experience and judgment, regard for social considerations and the most effective way of using human labour. These partly embody knowledge which has not been reduced to exact and mathematical form. "They also embody value judgments which are not amenable to the scientific method" (Rosenbrock 1977). These will be significant issues as IT is increasingly deployed in societal areas such as healthcare. Cases already abound and many have become high-profile public issues, e.g. the paediatricians who administered a fatal dose of 15 mg of morphine instead of the correct 0.15 mg for the baby (Rogers 1999; Joseph 1999). They did this in spite of being warned by a staff nurse that the dose was obviously incorrect.

Those introducing the avalanche of new technologies frequently limit their considerations to first-order outcomes. These usually declare the positive and beneficial features, whilst only fleeting attention is given to the downside, if at all. It is as if the laws of thermodynamics no longer apply and you can get something for nothing. We are now beginning to learn, to our cost, that there are "no free dinners" with technology. For too long we have ignored the double-edged nature of science and technology (S&T). Viewed in this light, it has produced the beauty of the Taj Mahal and the hideousness of Chernobyl, the caring therapy of Röntgen's X-rays and the destruction of Hiroshima, the musical delights of Mozart and the stench of Bergen-Belsen. Most technologies display positive and negative aspects.
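Cooley's observation that skilled engineers used calculation only as a fine-tuning check on an answer they already knew in ball-park terms suggests a simple defensive pattern: compare any machine-generated figure against an independently formed order-of-magnitude estimate before acting on it. A minimal sketch in Python (the function name, the tolerance factor and the example numbers are illustrative assumptions, not from Cooley's text):

```python
def plausible(calculated: float, ballpark: float, factor: float = 3.0) -> bool:
    """Return True if `calculated` is within `factor`x of a rough estimate.

    `ballpark` stands in for the engineer's own judgment; `factor` is an
    illustrative tolerance. A decimal point shifted one place produces a
    10x discrepancy, which this check flags.
    """
    if calculated <= 0 or ballpark <= 0:
        return False
    ratio = calculated / ballpark
    return 1 / factor <= ratio <= factor

# The CAD designer's error: dimensions ten times too large.
print(plausible(calculated=1250.0, ballpark=125.0))  # False: 10x discrepancy
# The morphine case: 15 mg administered against an expected 0.15 mg.
print(plausible(calculated=15.0, ballpark=0.15))     # False: 100x discrepancy
print(plausible(calculated=131.0, ballpark=125.0))   # True: within tolerance
```

The check cannot say which figure is right; it only forces the discrepancy between judgment and calculation back onto a human, which is the symbiosis the paper argues for.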
There is now an urgent need for a new category of competence: an ability to discern the positive and negative aspects of a given technology and to build upon the positive whilst mitigating the negative features. It is not a question of being for or against technology, but rather of discerning its positive and beneficial uses. One negative aspect of IT is the under-valorisation, and frequently the squandering, of our society's most precious asset: the creativity, skill and commitment of its people. Over the past 21 years AI & Society has facilitated a debate on positive alternatives to the existing developments and has placed particular emphasis on the potential for human-centred systems. Its articles, reports and the conferences it has facilitated have provided practical examples and case studies of systems design which celebrate human talents. It requires courage, tenacity and profound insights to develop these alternatives in our obsessively technocratic and machine-centred culture.

The wow factor

Technology in its multifarious forms is rapidly becoming all-pervasive. It permeates just about every aspect of what we do and who we are. It ranges from the gigantic, such as the diversion of rivers and the repositioning of mountains, to the microscopic level of genetic engineering. Science fiction becomes reality as faces are transplanted and head transplants are confidently predicted. The "wow!" factor is mind-blowing. Even simple internet procedures have a God-like quality. With Google Globe you can look down on our planet and travel over continents and countries, quickly homing in on an aerial view of your beloved "homestead", showing your own car in the drive. We now appear as masters of the universe, able to see everything and confident in the belief that any problem we create we can also solve. It is just a question of a plentiful flow of research grants and resources. Meantime, we plan to bury our nuclear waste.
Awesome capability

We are the only species ever to have it within its power to destroy itself, along with our beautiful and frail planet. This is an awesome capability, and one for which our culture, education and politics ill prepare us to cope creatively. Change is frequently and thoughtlessly portrayed as progress, and progress so unidimensionally defined is evident on all sides. In spite of this, at no time in history have so many people been fearful of the developments surrounding them, and become alienated from the society producing them. Doubts are jolted into concerns by global warming events or the looming spectre of an Avian Flu pandemic. Yet it tends to be a fear that dare not speak its name. Who, after all, can be against progress, even if it is defined in its own self-serving terms?

Paths not taken

In order to analyse where we are now with IT systems it is important to look back historically to identify turning points at which technology might have, and could have, developed differently. This is akin to Rosenbrock's notion of the "Lushai Hills Effect" (Rosenbrock 1988, 1990). He suggests that with technology we sometimes take a particular route of development, and once we have done so we begin to believe that it is the only one. We then develop cultural forms, educational systems and a philosophical outlook which support that contention. It therefore seems useful at this juncture to explore different interpretations of human and technological progress which may throw light on our present dilemma and indicate alternatives worthy of exploration.

Ego smashing events

We are indebted to Mazlish (1967) for the notion of technological and scientific development as dismantling discontinuities in historical ego-smashing events. The first arises from Copernicus and Galileo, which resulted in a re-organisation of the universe with our earth no longer at its centre.
The second is based on Darwin, who robbed human beings of the particular privilege of having been specially created. The third, based on Freudian insights, suggests that we are not the masters of our own consciousness in the way we had assumed ourselves to be. Our society is now apparently demolishing the fourth discontinuity: the one between humans and their machines.

Self elimination

"To put it bluntly, we are now coming to realise that man and the machines he has created are continuous and that the same conceptual systems that help to explain the workings of the human brain also explain the workings of a thinking machine. Man's pride and his refusal to acknowledge this continuity is the sub-stratum upon which the distrust of technology and industrial society has been reared" (Mazlish 1967). However, as we shall suggest later, this sub-stratum of distrust may be overcome if we view human beings and their machines as constituting a symbiosis rather than a convergence. Otherwise, as Karl Pearson (cited in Weizenbaum 1976) puts it: "The scientific man has above all things to strive at self-elimination in his judgments."

Walking, feeding, thinking

Another conceptual framework which yields interesting insights is to consider technological change as a series of phyla. Rapoport (1963) identifies four. The first phylum consists of tools. Tools appear functionally as extensions of our limbs. While some mechanical advantage may be gained from such a device, it in no way functions "independently of us." The second phylum is mechanical "clockworks." Here the human effort in winding up the mechanism is stored as potential energy which may be released. Over a long period of time the clockwork gives the impression of autonomous activity. Furthermore, it is not a prosthetic device to extend our human capabilities but rather one that produces time: hours, minutes ... down to pico-seconds.
Thus in his seminal work, Lewis Mumford asserts that it is the clock and not the steam engine that is "the key machine of the modern age", as it "dissociated time from human events and helped create the belief in an independent world of mathematically measurable sequences: the special world of science" (Mumford 1963). Weizenbaum points out that clocks "are the first autonomous machines built by man and until the advent of the computer they remained the only truly important ones." He also asserts: "This rejection of direct experience was to become one of the principal characteristics of modern science" (Weizenbaum 1976). The third phylum is heat engines. These gradually emerged as devices that were neither pushed nor pulled but "fed." The fourth phylum covers devices capable of collecting, storing, transmitting, manipulating and initiating information, and determining actions based on these.

It will be seen that in each phylum the device moves toward autonomous capabilities, but there is also a form of narcissism, a technological narcissism, as clockworks "walk", heat engines "feed" and computers "think." We design devices with some human attributes and then, in a strange dialectical way, we begin to perceive ourselves as partial mirror images of the machines. During the early stages of clockworks, drawings showed human sinews and muscles in a machine-like manner, and Descartes refers to the human being as a machine. In the era of heat engines there is a growing concern about what and how humans are fed. This is sometimes reflected in concerns about dietary intake, which some even suggest could lead to anorexia.
The fourth phylum leads to a situation where someone could say disparagingly "The human mind is the only computer made by amateurs", and a high priest of technology was presumably half joking when he said "Human beings will have to accept their true place in the evolutionary hierarchy: animals, human beings and intelligent machines."

Fault in reality

The foregoing provides an interesting context in which to view the potential for human-centred systems. However, the discussion of such systems has suffered from its questioning of the given orthodoxy in contemporary science. To do so is to elicit the disapproval of many of one's colleagues. Sympathetic colleagues may imply that you have not grasped the greatness of all that is going on. Less sympathetic colleagues hint that you are questioning rationality itself and are therefore guilty of irrationality. Although Stalinistic psychiatric wards are not threatened, grants may dry up and you can forget that tenured post. Perhaps the students in the sixties had a point with their posters: "Don't adjust your mind. There's a fault in reality."

Our culture conveys the sense that a calculation is precise, analytical and scientific. It is regarded as apolitical and objective. Indeed in the sixties, when social scientists were struggling to gain acceptance of their science, many of their papers were awash with calculations and diagrams. However, when I worked in the aerospace industry I found that those who could make best use of computers and calculations already knew in a "ball-park" sense what the answer should be, and they used computer-based calculation as a fine-tuning device. They were able to rely on their judgment, so if a discrepancy arose the problem would be re-visited. In spite of this, judgment tends to be regarded as something much less significant. An informed guess, or worse a shot in the dark, is often dismissed as mere speculation.
At the level of proficiency, Dreyfus refers to it as "holistic similarity recognition" and points out that "intuition is the product of deep situational involvement and recognition of similarity." This becomes expertise when "not only situations but also associated decisions are intuitively understood" (Dreyfus and Dreyfus 1986). Using still more intuitive skills, the expert can cope with uncertainties and unforeseen or critical situations and has the ability to override or disagree with calculated solutions. Decision making is probably at its best when there is a creative interaction between judgement and calculation. Both have their place in the symbiosis.

Intimidation

Pivotal to all of this must be whether the output of a calculation is correct and how we can verify its status. Calculations, at least in the temporary sense, can be quite intimidating even if they are completely wrong. Archbishop Ussher, in calculating the age of the world as understood in the Middle Ages, declared it was created in 4004 BC, on October 22nd at about 6.00 p.m. (Ussher, cited in Rosenbrock 2002). Although his calculation was wrong by some billions of years, it must have seemed quite impressive at the time. Recently, in a widely publicised trial, the expert witness Sir Roy Meadow declared the probability of two natural unexplained cot deaths occurring in a family to be 73 million to 1. The court was impressed. Only later, when the odds were shown to be closer to two hundred to one, was the enormity of the error exposed.

I have described elsewhere the shift from judgment to calculation, with some of the consequences. Initially these were in the engineering field, but they are increasingly occurring elsewhere, e.g. in the medical field. I have represented this graphically as a shift from judgment to calculation, from the subjective to the objective, and from signal to noise (Cooley 2002). The question may arise as to whether this matters significantly.
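The Meadow figure is worth unpacking, because the arithmetic behind it is so simple. The 73 million came from treating the two deaths as independent events and squaring the single-death odds; the 1-in-8,543 single-death figure used below is the number widely reported from that trial, an assumption beyond this text. A short Python illustration:

```python
# Reported odds of one unexplained cot death in a family of the kind
# considered at trial (widely reported figure; an assumption beyond
# this text).
single_death_odds = 8543

# The flawed step: treating two deaths in the same family as
# independent events and multiplying the odds together.
naive_double_odds = single_death_odds ** 2
print(f"Assuming independence: 1 in {naive_double_odds:,}")
# Prints: Assuming independence: 1 in 72,982,849 (the "73 million to 1")

# Cot deaths in one family share genetic and environmental risk
# factors, so the events are not independent; accounting for that
# dependence is what brought later estimates nearer the 200 to 1
# mentioned in the text.
```

The intimidating precision of the number concealed a modelling assumption, not a measurement, which is exactly the paper's point about calculation outranking judgment.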
Perhaps the problems identified are merely transitional ones which occur as the systems are being bedded down. It will be argued by many that this is in the nature of the human progress project. After all, we extended the capacity of our hands through a variety of tools. With spectacles, telescopes, microscopes and scanners we extend our vision. IT is merely a further development in which we now extend the capacity of our minds. This is a part of human progress: a speeded-up version of the strongest of the tribe climbing to the top of the hill to see what is on the other side. If it could be done, then do it!

Can we, should we?

I hold that it is no longer adequate to ask "Can we do it?" Rather, we need to enquire "Should we do it?" The fourth phylum is of a different order to the previous three. The new technologies under consideration have been developed by appropriating human intelligence and objectivising it into computer-based programmes and technological procedures. However, this is becoming qualitatively different from previous technological developments in that more and more humans, even at the highest professional levels, are becoming increasingly dependent on calculations and systems output. The deep problem arises when human abilities and judgments so atrophy that we are incapable of disagreeing with, questioning or modifying a system's output. A simple example of this is the increasing number of people unable to add a column of figures, even to get an approximate total.

Loss of nerve

I do believe that we are now at a historical turning point where decisions we make in respect of new technology will have a profound effect upon the manner in which our species develops. As matters now stand we are becoming increasingly dependent, some would say abjectly so, upon machines. Rosenbrock has cautioned against this approach.
In the field of computer aided design, the computer is increasingly becoming a sort of automated design manual, leaving only minor choices to the design engineer. This, he suggests, “seems to me to represent a loss of nerve, a loss of belief in human ability and a further unthinking application of the doctrine of the division of labour.” He further points out that the designer is thus reduced to making a series of routine choices between fixed alternatives, in which case “his skill as a designer is not used and decays” (Rosenbrock 1977). The same underlying systems design philosophy is now evident across most areas of intellectual activity. The outcome could be an abject dependence on systems and an inability to “think for ourselves.” However, we still have a historical window, one which may well be closing but which might still allow for the design of systems in a symbiotic manner, making the best use of human attributes together with those of the system. Half a century ago the Turing Test was devised to distinguish between human beings and machines. All around the world today we see examples of humans behaving more like machines and machines behaving more like human beings. The development is in the form of a convergence, whereas what is required is one based on symbiosis.

Parody becomes reality

In the BBC comedy series Little Britain, the character Carol is a bored and indifferent bank employee. When a customer asks for a £2,000 loan she types in a few figures and declares smugly: “Computer says no.” Becoming increasingly anxious, the customer makes a number of suggestions, including a smaller loan and meeting the Manager.
Getting the same response, the customer makes a final attempt, saying “Is there anything I can do?” Carol whispers to the computer and repeats “Computer says no.” All of this so resonates with the public's experience that there is now a brisk market for badges, fridge magnets, key-rings and cartoons bearing the slogan “computer says no.” You can even get a ring tone for your mobile declaring it. In the parody Carol at least speaks to the customer, but the reality can be much more alarming. When a Rochdale resident had no response whatsoever to three urgent e-mail messages to the Council's Planning Department objecting to the erection of a structure, he eventually established that the messages had been screened out by Rochdale's anti-porn software due to his inclusion of the dreaded word “erection” (Press report 2006). The computer had said “no” and the plans were passed before the protest could be considered, as the system was devoid of the contextual understanding that a human being would have applied. Such experiences are now becoming commonplace, even as IT equipment manufacturers proudly proclaim in adverts that their products help you “Take back control.”

On your bike

The nature of technological change in its current form is that propositional knowledge becomes more significant than tacit knowledge. This results in “know that” being more important than “know how.” Tacit knowledge comes from “learning by doing” and results in the ability to judge situations based on experience. Propositional knowledge is based more on analysis and calculation. Within the human centred tradition, a symbiosis of the two and a creative interaction between them is essential. This is particularly true in the case of skilled activities.
The nature of tacit knowledge is that, to quote Polanyi, “There are things we know but cannot tell.” In his seminal paper he continues: “I can say I know how to ride a bicycle or how to swim, but it does not mean that I can tell how I manage to keep my balance on a bike or keep afloat when swimming. I may not have the slightest idea of how I do this, or even an entirely wrong or grossly imperfect idea of it, and yet can go on cycling and swimming merrily.” He points out that there are two kinds of knowing which invariably enter jointly into any act of knowing a complex entity. There is, firstly, knowing a thing by attending to it, in the way that we attend to the entity as a whole; and secondly, knowing a thing by relying on our awareness of it for the purpose of attending to an entity to which it contributes. A detailed explanation of this is given by Polanyi himself (Polanyi 1962).

Use–abuse

One of the key strands of the debate about human centred systems in the UK arose not so much in academic circles as in the industrial context of Lucas Aerospace. The company employed some 18,000 skilled craftsmen, prototype fitters, engineers, metallurgists, control systems engineers, scientists and laboratory staff. In the early seventies the company was one of the world's largest manufacturers of aerospace actuators, generators, systems and auxiliary items. It was clear that the company was embarking on a rationalisation strategy, and it eventually emerged that some 4,000 of these world-class technologists were facing unemployment. Several leading members of trade unions were engaged in debates in the continuing discussion, dating from the sixties, on the role of science and technology (S&T) in society. These discussions went far beyond the use/abuse model and questioned the nature of S&T itself. There was a vigorous discussion about the gap between the potential of S&T and its reality.
Furthermore, there was a questioning of the assumption that science, in its own terms at least, had come to monopolise the notion of the rational and could therefore be counter-posed with irrationality and suspicion. Indeed, it came to be seen as a means by which irrationality could be exercised. In discussions and exchanges of correspondence with organisations such as the British Society for Social Responsibility in Science, through to academics in the USA, it gradually began to be realised that, far from being neutral, S&T actually reflected sets of values, causing us to speak in terms of the control of nature, the exploitation of natural resources and the manipulation of data.

One best way?

It was clear that within S&T there is the notion of the “one best way.” However, viewing S&T as part of a culture which produced different music, different literature and different artefacts, why should there not be differing forms of S&T? Furthermore, there was an increasing realisation that S&T had embodied within it many of the assumptions of the society giving rise to it. Space will not permit a detailed exploration of these extraordinary developments. Suffice it to say that the workforce produced a plan for what they called “Socially Useful, Environmentally Desirable Production.” They produced and demonstrated a road/rail vehicle and prototypes of city cars, and they designed and produced a range of medical products, all as an alternative to structural unemployment. There were also a variety of products proposed for third world countries. In discussions dealing with how these products would be produced, it was suggested that producing them in the usual Tayloristic, alienating fashion would be unacceptable, and so there arose in parallel a searching and probing discussion about the notion of human centred systems which would celebrate human skill and ingenuity rather than subordinate the human being to the machine or system.
In the discussions which led to the widely acclaimed Lucas Workers' Plan (Cooley 1991), there needed to be practical examples so that the polarised options of development could be recognised: that is, whether the process should be one of total automation and machine-based systems, or one which would build on human skill and ingenuity. The EEC sponsored a major research programme with research institutes and private companies in Denmark, Germany and the UK to produce a human centred system, and the positive results of this are reported elsewhere (Cooley 1993).

Telechirics: high tech, high touch

Another practical example arose from the design need to produce a submersible vehicle capable of carrying out repairs in hazardous offshore environments. Initial considerations of a highly automated device indicated the huge computing and feedback capabilities necessary if humans were to be excluded from the process. It was recognised that a telechiric device could work in a remote and hazardous environment while providing audio, tactile and visual feedback to skilled operators in a safe environment. Such devices were already in use in other hazardous environments such as nuclear power. Thus telechiric devices became one of the product proposals in the Lucas Plan, and emphasis was laid upon the wider application of such devices, not least in the medical field. In all cases the systems were designed so as to celebrate and enhance the skill and judgment of the human beings involved.

Look and feel

In the case of surgery, some of the sensationalist press headlines refer to “robotic surgeons.” In fact the reality is that some of these systems enhance the skill of the surgeon rather than diminish it. An example is the field of minimally invasive surgery.
These systems provide enhanced dexterity, precision and control, which may be applied to many surgical procedures currently performed using standard laparoscopic techniques. In fact, the systems now reported succeed in providing surgeons with all the clinical and technical capabilities of open surgery whilst enabling them to operate through tiny incisions. As one of the companies producing such systems points out, this succeeds in maintaining the same “look and feel” as that of open surgery. The surgeon is provided with a “tool” to enhance and extend his or her skill, whilst the patient may experience a whole range of improved outcomes, e.g. reduced trauma to the body, shorter hospital stay, less scarring and improved cosmesis. It is the judgment of the skilled surgeon that drives the system, not the technology.

Cavalier disregard

As AI & Society celebrates its 21st birthday, it is gratifying to see the emergence of some systems displaying many of the symbiotic attributes the journal has been espousing. Alas, the dominant tendency is still to confer life on systems whilst diminishing human involvement. Designers do so with cavalier disregard for potential human competence. Quantitative comparisons of human and systems capabilities are questionable, as they do not compare like with like. However, it is sobering on occasion to reflect upon a ballpark comparison. Thus Cherniak (1988) suggests that the massive battle-management software of the Strategic Defense Initiative is “at least a 100 times smaller than the estimated size of the mind's programme.”

Networks

Human and technology networks can encourage and stimulate people to be innovative and creative. To encourage people to think in these terms, we need a form of enterprise culture. However, universities and conventional secondary schools disregard such attributes because many are not predictable, repeatable or quantifiable.
From a democratic standpoint, we need to redirect S&T, because more and more of our citizens are opposed to its present form and to those who own and control it. A recent survey of EU citizens shows that if you ask them whom they can believe when informed about issues such as bio-engineering and genetic modification, only about 21% believe that you can accept what the multinationals tell you, which suggests to me that there are still a lot of trusting people out there. If you then ask about universities, only 28% say that you can believe what the universities say, because they are frequently apologists for the big companies. However, if you ask them “Can you believe what Greenpeace tells you?” 54% will say “Yes.” Now this survey is a very important warning for us. If we have lost the trust of our citizens, it is no use pleading that they cannot or have not understood, for it is our fault for failing to communicate adequately. There are ways of communicating, if we really want to, without making a virtue out of complexity.

Kindred spirits

Challenging the given orthodoxy is a precarious and lonely affair. It is therefore important to build up and participate in a supportive network of kindred spirits. This may take many forms; one example is the Institute Without Walls set up by AI & Society colleagues. The exchange of ideas and the development of collaborative projects are all important. The support of funding bodies was likewise important, with the Greater London Enterprise Board gaining EU research funding for Esprit 1217 to design, build and demonstrate a human centred manufacturing system. Funding was also made available by the EU FAST project to set up a team of experts from EU member states which would produce a report.
The ensuing report was entitled “European Competitiveness in the Twenty-First Century: the Integration of Work, Culture and Technology.” It was part of the FAST proposal for an R&D programme on “Human Work in Advanced Technological Environments.” The report provided practical examples of human-enhancing systems and called for an industrial and cultural renaissance. It advocated that new forms of education should facilitate the transmission of a culture valuing proactive, sensitive and creative human beings. In 1990, the EU commissioned and published 26 reports in its Anthropocentric Production Systems (APS) research papers series. Several of these were based on an analysis of the potential of APS for individual member states.

Cherish skill and judgment

During the formulation of the original ideas, the International Metalworkers' Federation held a conference in 1984 and hosted a presentation by the author entitled “Technology, Unions and Human Needs.” The presentation, subsequently published as a 58-page report in 11 languages including Finnish and Japanese, was circulated to the Federation's members worldwide. Publicity for these ideas at the more popular level was also important, as it is spurious to talk about a democratic society if the public cannot influence the manner in which technology is developing. In this context, the one-hour television programme in the Channel 4 (London) Equinox series, presented by the author, caused considerable interest, as did a number of interviews and articles in the more popular press. TV Choice London produced an educational video, “Factory of the Future,” explaining the application of human centred systems which valorise human skill and judgment.

The wriggling worm

Education, like democracy, can only be partially given; for the remainder, it must be taken. Indeed, taking it is part of the process itself.
Some of those designing IT systems for education behave as though a body of knowledge can be downloaded onto a human brain. It is true that some of these systems are impressive and, used as tools to aid human learning, they are and will continue to be of great significance. The range of options, images, supporting films and graphic animation can indeed be overwhelming. However, it should be noted that in many cases they come between us and the real world. They provide us with forms of second- and third-order reality and information. This may be explained by a simple example. Any child can get an impressive range of support from the internet and learning systems, but this form of knowledge is very different from that acquired by one who goes into the local wood, lifts up a stone, picks up a worm and feels it wriggling in the palm of his hand. To this tactile input may then be added contextual information: summer or winter? Farms in the background? Was there the scent and feel of damp soil or decaying leaves? So I suggest that in education in the coming years we are going to acquire learning in developing situations where there will be the form of explicit knowledge you acquire in a university, but of equal importance will be the implicit knowledge and the informal situations that really inform our lives. It is essential to understand that if we just proceed on a mechanistic basis, the mistakes we make will be truly profound and creative opportunities will be missed.

Natural science?

We are frequently told that the best way we can proceed is within a rule-based system. This is absolutely extraordinary! As any active trade unionist knows, the way to stop anything in its tracks is to work to rule. It is all the things that we do outside the rule-based system that keep everything going.
As matters now stand, the given scientific methodology can only accept that a procedure is scientific if it displays the three predominant characteristics of the natural sciences: predictability, repeatability and mathematical quantifiability. These by definition preclude intuition, subjective judgment, tacit knowledge, dreams, imagination, heuristics, motivation, and so I could go on. So instead of calling these the natural sciences, perhaps they should be renamed the unnatural sciences. There are other ways of knowing the world than by the scientific methodology. Furthermore, when we talk of informating people rather than automating them, we need to be clear that we are talking about information and not data. Transforming data into information requires the situational understanding which the human can bring to bear. This information can then be so applied as to become knowledge, which in turn is absorbed into a culture and thereby becomes wisdom (Cooley 2002).

The mistress experience

Reductionists have much to answer for. They have intimidated those who proceed on the basis of tacit knowledge. Even the giants of our civilisation were derided by them. Thus we have Leonardo's spirited riposte: “They say that, not having learning, I will not speak properly of that which I wish to elucidate. But do they not know that my subjects are to be better illustrated from experience than by yet more words? Experience, which has been the mistress of all those who wrote well; and thus, as mistress, I will cite her in all cases” (Cooley 1991). The academic reductionists had even enacted a law to prevent master builders calling themselves “master” because it might have been confused with the academic title “magister.”

Perfect flower of good manners

As early as the thirteenth century, Doctors of Law were moved to protest formally at these academic titles being used by practical people whose structures and designs demonstrated competence of the highest order.
Thus the separation between intellectual and manual work, between theory and practice, was being further consolidated at that stage, and the title Dr Lathomorum was gradually eliminated. The world was already beginning to change at the time when the following epitaph could be written for the architect who constructed the nave and transepts of Saint Denis: “Here lies Pierre de Montreuil, a perfect flower of good manners, in this life a Doctor of Stones.” Significantly, following this period, there emerged in most of the European languages the word DESIGN or its equivalent, coincident with the need to describe the occupational activity of designing. This is not to suggest that designing was a new activity; rather, it indicated that designing was to be separated from doing, and tacit knowledge separated from propositional knowledge (Cooley 1991).

Liberating human imagination

Within the human centred tradition, liberating human imagination is pivotal. This is as true in the hardest of the sciences as it is in music or literature. Einstein said on one occasion, “Imagination is more important than knowledge.” Furthermore, when pressed to reveal how he arrived at the theory of relativity, he is said to have responded, “When I was a child of 14 I asked myself what it might be like to ride on a beam of light and look back at the world.” In a wider sense, we need to emphasise all the splendid things that humans can do. This is in contrast to the defect model, which emphasises what they cannot do. The destructiveness of viewing humans in this manner is dramatically highlighted in an extraordinary passage in James Joyce's Finnegans Wake, where he describes the purveyors of this negative approach as: “Sniffer of carrion, premature gravedigger, seeker of the nest of evil in the bosom of a good word, you, who sleep at our vigil and fast for our feast, you with your dislocated reason...” (Cooley 2005).
Confucius

This article has been wide-ranging and will have raised a number of controversial issues. The references provide a framework in which to explore the ideas further. Some parts of it deal with cutting-edge new technologies, yet it is gratifying to think that we can revert to Confucius to encapsulate these ideas so succinctly: “I hear and I forget. I see and I remember. I do and I understand.” Now, on this 21st birthday, the journal can be proud of the impressive body of work it has nurtured. This augurs well for the future development of systems which will be more caring of humanity and our precious planet.

References

Cherniak C (1988) Undebuggability and cognitive science. Commun Assoc Comput Mach 31(4):402–412
Cooley M (1991) Architect or bee? The human price of technology. Chatto & Windus/The Hogarth Press, London. 2nd impression 1991
Cooley M (1993) Skill and competence for the 21st century. Proc IITD conference, Galway, April 1993
Cooley M (2002) Stimulus points: making effective use of IT in health. Workshop, Postgraduate Department, Brighton & Sussex Medical School, 14.10.2002
Cooley M (2005) Re-Joyceing engineers. AI Soc 19:196–198
Dreyfus HL, Dreyfus SE (1986) Mind over machine. The Free Press
Joseph C (1999) Article, The Times, London, 20.04.1999
Mazlish B (1967) The fourth discontinuity. Technol Cult 8(1):3–4
Mumford L (1963) Technics and civilisation. Harcourt Brace Jovanovich, New York, pp 13–15
Pearson K (1976) Cited in Weizenbaum J, Computer power and human reason. WH Freeman & Co, San Francisco, p 25
Polanyi M (1962) Tacit knowing: its bearing on some problems of philosophy. Rev Mod Phys 34(4):601–616
Press report (2006) Daily Mail, 31.05.2006
Rapoport A (1963) Technological models of the mind. In: Sayre KM, Crosson FJ (eds) The modelling of the mind: computers and intelligence. The University of Notre Dame Press, pp 25–28
Rogers L (1999) Article, Sunday Times, 18.04.1999, p 7
Rosenbrock HH (1977) The future of control. Automatica 13:389–392
Rosenbrock HH (1988) Engineering as an art. AI Soc 2:315–320
Rosenbrock HH (1990) Machines with a purpose. Oxford University Press, Oxford, pp 122–124. (See also book review in AI & Society vol 5, no 1)
Rosenbrock HH (2002) Ussher, cited in “A Gallimaufry of Quaint Conceits”. Control Systems Centre, UMIST
Weizenbaum J (1976) Computer power and human reason. WH Freeman & Co, San Francisco, p 25
