History of Computing: Final Essays

Post your final project essays in the comments here by Tuesday, Nov. 19th at 9pm. Prepare the 6-minute oral portion of your project for Thursday’s class. (If you choose to use visuals, remember to include a link to your prezi in your comment, and make sure that your prezi is set to “public” so everyone can see it.) Please leave an extra line of whitespace between each of your paragraphs for formatting (otherwise they run together).

4 comments

  1. Jamalk

    It’s extremely hard to draw a clear line between the history of computing and the history of information technology, because computers are tools for obtaining and transferring information. Since the dawn of time, humans have realized the importance of information, and how beneficial obtaining certain knowledge can be to a group.

    Our ancestors inscribed on their ‘tablets’ how information about seasons and geography helped their trade and economies and prevented certain disasters, and how critical information about their enemies enabled them to win certain battles. However, our ability to obtain and transfer information flourished during the late 20th and early 21st centuries, with cutting-edge technologies like the internet, and computers with extraordinary processing and storage abilities.

    The British Empire considered trade its most vital source of income, and the British therefore needed to produce accurate nautical tables, which are critical to navigation. Creating accurate tables required a lot of ‘computers’: people with great expertise in math devoted to calculating these tables. A brilliant man named Charles Babbage suggested a solution by theorizing the first mechanical computer to calculate tables, called the Difference Engine. In 1823 the British government realized the power of Babbage’s suggestion (providing accurate tables efficiently could enrich the economy) and decided to invest £1,500 in his machine. However, Babbage’s idea was not as glamorous in real life as it was on paper; Babbage couldn’t complete his Difference Engine due to technical difficulties. “Unfortunately, the engineering was more complicated than the conceptualization. Babbage completely underestimated the financial and technical resources he would need to build his engine.” (Martin Campbell-Kelly, “Computer.”) We can see how Babbage’s idea was designed to provide a certain kind of information that could deliver an outstanding outcome for the economy and trade in general.
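
    The Difference Engine's trick was the method of finite differences: a polynomial table can be generated with nothing but repeated addition once the initial value and its differences are set. A minimal sketch in Python (the function name and the sample polynomial are illustrative, not from the reading):

```python
# The Difference Engine tabulated polynomials by the method of finite
# differences: after the first row is set up, every new table entry
# requires only additions, no multiplication.
# Example polynomial: f(x) = x**2 + x + 1, whose starting row is
# [f(0)=1, first difference=2, second difference=2].

def difference_engine(initial_differences, steps):
    """Generate successive polynomial values by cascading additions."""
    diffs = list(initial_differences)
    values = []
    for _ in range(steps):
        values.append(diffs[0])
        # Update each difference by adding the difference below it.
        for i in range(len(diffs) - 1):
            diffs[i] += diffs[i + 1]
    return values

print(difference_engine([1, 2, 2], 5))  # [1, 3, 7, 13, 21], i.e. f(0)..f(4)
```

    Each "turn of the crank" performs one pass of additions, which is exactly what made the design mechanically feasible in the 1820s.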

    During the 1860s there were over 75,000 miles of telegraph lines in Britain, and this communication infrastructure revolutionized the way information could be transferred from one region to another; it later played a vital role during World War II. During the war, computers, the telegraph network, and computer scientists changed the course of the conflict simply by decrypting enemy messages in the United Kingdom and sending them quickly to command so that appropriate action could be taken, and this led the Allies to achieve victories over the Nazis. The technicians who worked on decrypting messages at Bletchley Park realized the strong impact of information technology, and how a single piece of information sent at the right time can save lives and prevent a catastrophe. “Turing’s heroic role in wartime code-breaking had also become publicly acclaimed (it is believed that his work shortened the war in Europe by many months)” (Martin Campbell-Kelly, “Computer.”).

    The most common narrative about the history of computing and the history of information technology is the Anglo-American one, centered on computers like the Harvard Mark I and the EDSAC, and it usually tends to ignore the outstanding achievements of other nations like India and Chile.

    In the early 1970s, during Allende’s socialist rule, the government wanted to implement massive reforms to the broken economy. Despite the American sanctions at the time, Allende was able to construct a technologically advanced cybernetic system called Cybersyn. Cybersyn was designed to facilitate command and control over Chile’s economic system, providing critical information about the economic status of Chile and helping the government intervene at the right time to avoid dangerous economic breakdowns. I think Project Cybersyn was a significant experiment in the history of information technology in general, because it showed how different technologies can be built upon different ideologies and economic systems, and can potentially change social standards. “Beer believed the system could provide a way to change how white-collar technologists interacted with blue-collar workers, but it was impossible to undo long-standing prejudices overnight.” (Medina, “Cybernetic Revolutionaries”)

    Despite the fact that many countries had different kinds of experience with communication and computing, the United States took the lead, with a new universal network originally designed in the late 1960s to connect military labs and academic institutions across the country so they could share research. This network grew exponentially around the world to become what we all now know as the Internet.

    The internet provided a massive network for sharing information about everything we could imagine, for the benefit of all people around the world. However, as the content on this network dramatically increased, people found themselves in increasing need of search engines that could help them find information efficiently, and in the blink of an eye, companies like Google became massive multinational corporations, sometimes more powerful than many governments around the world.

    Google didn’t merely provide a search engine; it reached a level where it controls a huge portion of everything we see online, and it has tried its best to keep us connected to an unlimited source of information about everything going on around the world. Google has many ambitious projects, one of which is Google Books. Google wants to scan all the books in the world, store them electronically on its servers, provide libraries with access to all the content, and give users in their homes snippets from the books.

    The project seems extraordinary, and it can be considered a huge leap for humanity toward achieving our ancestors’ dream of universal knowledge accessible by everyone, anywhere. At the same time, the project poses a lot of questions about Google’s intentions, because collecting all this knowledge on Google’s servers would give Google full control over information around the world; Google could even set prices for this content at will. “A monopoly was being created, a monopoly of access to knowledge” (Google and the World Brain).

    I’m not against the notion of sharing information or collecting human knowledge, but I’m a firm believer that this knowledge belongs to all of us, and that the goal of sharing information since the dawn of time has been the good of all people. I don’t believe in centralizing information; I don’t want a single company to have all the knowledge in this world stored on its servers, because centralization creates fragility in the system, and this could lead to unforgivable consequences similar to those of the past, when the Library of Alexandria burned and we lost a huge portion of the knowledge of that time. A bug or a virus could find its way onto Google’s servers, and we could lose an incredible amount of information again.

    To conclude, the technologies we have now went through a lot of evolution to empower us, and the tools we now have for sharing information are incredibly powerful. But we need to keep an eye open and not allow companies or governments to impose any kind of restriction on one of our most fundamental rights: communicating and sharing information.

  2. Karl Kozlowski

    This essay attempts to uncover how the driving factors for innovation determine the shape of the resulting technology in the field of computing, and in some cases, how the driving factors are themselves shaped by innovation.

    Campbell-Kelly et al. present us with several examples of computer technology being shaped by driving factors, including perceived military need shaping the SAGE defense system (Campbell-Kelly et al., 150-151); the need for business efficiency shaping airline reservation systems (Campbell-Kelly et al., 152-157) and UPCs (Campbell-Kelly et al., 162-164); and the desire for fast communication shaping the early Internet (Campbell-Kelly et al., 283-285). But these examples merely scratch the surface; the interplay between the driving factors and the resulting technology can get far more varied and much deeper.

    For instance, we see that the lack of uninterrupted power supplies (UPSs) in India led Hindustan Computers Limited to develop a Power Shutoff Auto Restart mechanism for its 8C (Subramanian, 7). This mechanism allowed large-volume transaction processing to continue operations after a power outage, instead of needing to be restarted from the beginning (Subramanian, 8). We thus see that instead of creating a UPS, they found a way around the lack of one.
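
    The idea behind such a mechanism can be sketched as checkpoint-and-resume: persist progress as the job runs so that, after an outage, processing resumes from the last saved point rather than from record zero. This is a generic illustration, not HCL's actual Power Shutoff Auto Restart design; the file name and state format are invented for the example:

```python
# Generic checkpoint/resume sketch: a batch job saves its state after
# each "transaction" so a power failure loses at most one record of work.

import json
import os

CHECKPOINT = "job.ckpt"

def process(records):
    """Sum records, resuming from a checkpoint file if one exists."""
    start, total = 0, 0
    if os.path.exists(CHECKPOINT):
        with open(CHECKPOINT) as f:
            state = json.load(f)
        start, total = state["next_record"], state["total"]
    for i in range(start, len(records)):
        total += records[i]                      # the "transaction"
        with open(CHECKPOINT, "w") as f:         # persist progress
            json.dump({"next_record": i + 1, "total": total}, f)
    if os.path.exists(CHECKPOINT):
        os.remove(CHECKPOINT)                    # job finished cleanly
    return total

print(process([1, 2, 3, 4]))  # 10
```

    If the process dies mid-loop, rerunning it picks up where the checkpoint file left off, which is the essence of working around a missing UPS in software.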

    The Internet was shaped by the way in which it was privatized in the early 1990s, which in turn was shaped by at least three issues: “expansion of the network’s infrastructure, the user base, and the types of activities and applications supported”; “the maneuvering of various actors within the NSFnet community to gain greater control over network operations”; and “the emergence of data networking as a national policy issue” (Abbate, 1-2). The privatization itself enabled the Internet to get as big as it did in the 1990s, since “the spread of the Web created new demand for Internet access among the general public just as commercial ISPs were positioning themselves to meet that demand” (Abbate, 8).

    In fact, the resulting Internet growth presented a lot of new legal issues that have further shaped its applications. A prime example of such a legal issue is copyright law. The main driving force behind the creation of the filesharing program Kazaa was to be able to download copyrighted materials while avoiding the enforcement of copyright laws (Goldsmith & Wu, 108-110). Interestingly, Kazaa was ultimately too good for its own good: when its creator Niklas Zennstrom tried to make revenue off it, people used Kazaa to find ways around paying for Kazaa. “[I]ts business model began to depend simultaneously on avoiding and enforcing copyright”, and Kazaa went under as a result (Goldsmith & Wu, 116-118). While all of that was happening, iTunes came out as a legal way to download music while making money for the recording industry (Goldsmith & Wu, 119-120). Suddenly, the legal way of doing things was also the easiest.

    Another company that has struggled with legal issues is one that has done a great deal to shape current technology: Google, Inc. While the driving factors behind Kazaa dealt with legal issues directly from the outset, those behind Google’s products generally run into legal issues indirectly in the aftermath. This is because, as the Google Books incident demonstrated, Google tends to act first and think about consequences later (Film: Google & the World Brain). The driving factor behind Google Books was information availability: the basic goal of the project was to digitize as many books as possible to make the information contained in them readily accessible to the masses. Unfortunately, at least half of the books Google digitized in this project were copyrighted works. Google ended up in a settlement with publishing agencies, which was ultimately voided due to concerns over the competitive edge Google would have gotten as a result (Film: Google & the World Brain).

    Google is an interesting example because the driving factors behind their innovations have generally been admirable, even if the results have been somewhat mixed. The drive to make information available has obviously made life easier for a lot of end users of the Google search engine. But “[t]he worst situation was when someone was put into physical danger from information unearthed by Google… unless there was legal justification for removing the information… Google said it couldn’t do anything” (Levy, 174).

    In summary, we have added several driving factors and their corresponding innovations to the narrative of Campbell-Kelly et al. We have noted HCL finding a way to make computers work properly despite the lack of UPSs in India; the privatization of the Internet leading to its explosive growth; the existence and enforcement of copyright law shaping the way people get music and books online; and the way the Google search engine has turned the Internet into a double-edged sword for its users. While the first two examples reinforce the notion that innovation is shaped by the driving factors behind it, the last three examples suggest that innovation can also shape its own driving factors. There is obviously an interplay between the two, so it would seem that the real question in a given situation is which one is shaping the other.

  3. Kevin President

    Power and control have been viewed by many as the ultimate achievement in life. Throughout the centuries, before computers and up to this day, there has been a noticeable trend in the role of technology in the acquisition and defense of power. Whether it be nations fighting for superiority, underprivileged people fighting for a better life, or businesses pushing for economic supremacy, technology has been the ‘secret’ weapon common to them all.

    Between the 1870s and the early 1900s, before one of today’s most widely used technologies, the computer, the economy was not as organized as we know it. The terms “market” and “industry” were unknown to the average customer. Among the most innovative technologies of the time was office equipment such as typewriters, cash registers, and adding machines. These machines made customers’ lives easier by simplifying basic everyday tasks such as writing, but they also had the potential to make investors’ lives easier. It was only a matter of time before business-minded men and women would exploit the comforts these technologies provided for the benefit of their pockets. Gradually, there was a move towards the establishment of markets and industries. For instance, the typewriter and cash register markets would fall under the office equipment industry. These markets and industries are now the backbone of the modern economy, and they might not have developed into these marvels had it not been for technology. This was the beginning of technology being used to control the economy.

    A few years later, with the onset of World War I, there was a new need for technology. The war was having very detrimental effects on the economy. “As industrializing nations sought to enhance control over events with information-processing equipment, it was inevitable that such efforts would be turned to the war process” (Cortada, p. 80). Therefore, governments took charge and put many policies in place to keep the system stable. Technology was still needed to make jobs easier, but the tasks at hand were no longer mere writing; they involved daunting calculations, such as cracking encrypted codes and formulating the ballistic tables needed during the war. To tackle such problems, we saw the birth of the computer. This new technology came in like a storm and wiped out the competition. The army was now able to crack the era’s most difficult encryptions and do calculations more than ten times faster than its best human computers. This new technology bestowed a great deal of power on whoever wielded it.

    Whilst governments preached that wars were mainly for the protection of their people, it would be more accurate to say that war is a fight for superiority between egoistic nations. This superiority created a sense of power and dominance yearned for by many nations throughout history. Being the most powerful nation was sure to intimidate weaker nations and thereby build a sense of respect. Seeing how successful computers were, there was no doubt that the army would continue to fund the development of this technology, and that it did.

    The technology the computer was based on developed rapidly over the years, but it was very limited in that computers were unable to communicate with each other. Overcoming this limitation was sure to make the military more formidable. The solution to this problem was the Arpanet, which was later succeeded by the NSFnet, which in turn developed into the internet we know today. Of all the technologies we have today, the internet is by far one of the most powerful, and one of the most destructive if placed in the wrong hands.

    From the inception of the internet, there have always been questions about who should control it. “[T]he work of graduate students, and the attraction of the Arpanet to early participants carries with it a sense of inevitability. But why the Arpanet was built is less frequently addressed” (Lukasik, p. 4). That question still lacks an answer to this day. One of the first instances of this dispute was the movement from the NSFnet to the internet. The military, and by extension the government, had been in control of the internet’s predecessors, the Arpanet and the NSFnet, but with the privatization of the internet there were many foreseen consequences: “lack of regulations mean that unlike phone companies, backbone providers can, and sometimes do, harm subscribers by cutting off their connectivity on short notice” (Abbate, p. 18).

    Even after the privatization of the internet, governments around the world have continued trying to control it. They have realized that the power the internet holds can make and break them. Therefore, countries such as China have almost cut their populations off from the rest of the world. China has censored all internet content with the potential to start a revolution.

    Whilst countries such as China are still able to control their people using the internet, the populations of other countries, such as Egypt and Tunisia, have wielded the power of the internet to impose their own authority over their governments. In the modern day, the internet is widely used as a tool to fight oppression. One of the groups that has used the internet to move up the power pyramid is Anonymous. Anonymous preached a one-face, one-people, no-leader policy. They fought for the freedom of the internet from governmental regulation. They are a perfect example of the power that technology has: sitting in their rooms, these individuals were able to terrorize some of the world’s largest organizations and oppressive regimes, and they were very influential in the resignation of the Tunisian government. The Egyptian government was powerful enough to follow suit after China and take down the internet in their country, but with the power of technology, Anonymous was able to help the Egyptian people get back online from thousands of miles away.

    Technology has been around for a very long time. As technology develops, so does the power it holds. Technology has the ability to transform virtual power into reality.

  4. Adam Eberlin

    The Computer: From Women To Machine

    Long before the age of electronics, dating back to WWI, the title of “computer” was reserved for skilled individuals capable of performing tedious or otherwise complex mathematical computations. It wasn’t until much later in the 20th century that the term came to mean a mechanical or electronic machine. Even in modern times, the field of Computer Science is perceived as highly masculine, a misconception society has had difficulty breaking free from.

    Prior to the 21st century, employers were “unwilling to invest in workers they perceived as unreliable.” Women were disfavored in the workplace, as they were seen as more likely to stay at home to raise children and, therefore, unwilling to commit to their careers (VI, 5). However, this epitomized a self-fulfilling prophecy: women were afforded zero potential for advancement, and therefore had no incentive to commit to their careers (VI, 5).

    The first machine to even slightly resemble what our generation would identify as a computer was envisioned in 1822 by Charles Babbage, who set out to create a machine to automate the process of computing and printing naval navigational charts for the British Empire (I, 7). Although the project was fraught with delays and was never successfully carried through to completion, Babbage’s Difference Engine was the first known mechanical machine designed to perform complex calculations on such a scale. Although history may describe Babbage’s device as a calculator, the fact remains that terms such as “calculator” and “computer” didn’t see prevalent usage for machines before the 20th century.

    Beginning in the late 1930s, with the onset of WWII, the word “computer” most readily referred to young women who worked fervently, often in large teams, to complete complex mathematical calculations. During the war, they were employed with the task of computing firing tables for artillery. As technology advanced, the term evolved to identify the women who maintained the machines or otherwise managed the environment in which the machines processed data.

    The Colossus, the first electronic, digital, and programmable computer, was designed by Tommy Flowers to implement Max Newman’s mathematical process of decrypting German Lorenz-encoded messages during the war. The first Colossus was constructed in the fall of 1943, but it was eventually replicated, and before the end of the war a total of ten of these machines were fully operational in Britain. The top-secret facility at Bletchley Park, where the Colossi were held along with an array of other code-breaking machines such as Bombes and Heath Robinsons, employed a large team of mostly young, single women to maintain the variety of machines critical to the war effort. During an era of rampant sexism, the Newmanry Wrens, as these teams of women were known, worked long, successive shifts in order to keep the machines operating and constantly fed with data (II). They were organized in such a fashion that each woman maintained a fixed position in the process, in which she could become “technically proficient” (II, 159), but as the war wore on, each worker did multiple jobs. Although it is true that many of the men were overseas physically fighting the war, the workplace was highly feminized, and many of the women involved never received recognition for their work, nor were they promoted.

    On the other side of the Atlantic, women in America were also working to further the Allied war effort. At Harvard, Howard Aiken, who was significantly inspired by Babbage’s work, conceived the idea of a machine to compute ballistics tables for navy gunships. The project began in 1937; the machine was designed and built by IBM and installed at Aiken’s lab at Harvard in 1944. Dubbed the Harvard Mark I, the machine was digital and electromechanical, and it additionally served as a training site for several early programmers who went on to define the field of Computer Science. Grace Hopper, one of the most influential figures in the field for decades to come, worked with a select group of male engineers and with Howard Aiken to keep the machine running, though that was easier said than done.

    Aiken, an extremely sexist individual, frequently butted heads with Hopper, and their encounters seem representative of the antagonistic work environment women computers faced in America and England (III, 37-42) (V, 457). Despite all this, Hopper made the best of the situation, developing hands-on experience with the somewhat temperamental machine, which had no instruction manual and involved a great deal of trial and error. On rare occasions, her unique sense of humor even made its way to the surface. Indeed, she once noted the occasion when a moth was found stuck in a relay of the Mark II: “first actual bug found,” she quipped (IV).

    After the war, Grace Hopper became heavily involved in programming the UNIVAC in the early 1950s. One of her most significant innovations, the compiler, transformed high-level subroutine calls into machine code, which significantly improved the speed of both the programming process and the execution of the code (VII, 78). Yet this is not widespread knowledge among workers in the software engineering field, even those who frequently benefit from her foresight. (While working on the UNIVAC, Hopper came into contact with many other women in computing, including some who had worked on programming the ENIAC.)
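
    Hopper's core insight was that a program could be assembled automatically from a library of pre-written subroutines. A toy illustration of that idea in Python (the call names and pseudo-instructions here are invented for the example, not the real A-0 format):

```python
# Toy "compiler" in the spirit of Hopper's early compilers: each
# high-level call is looked up in a subroutine library and expanded
# into a flat sequence of lower-level pseudo-instructions.

SUBROUTINE_LIBRARY = {
    "SQRT": ["LOAD arg", "CALL newton_sqrt", "STORE result"],
    "ADD":  ["LOAD arg", "ADD acc", "STORE result"],
}

def compile_program(calls):
    """Expand a list of subroutine names into one instruction sequence."""
    code = []
    for name in calls:
        code.extend(SUBROUTINE_LIBRARY[name])
    return code

print(compile_program(["ADD", "SQRT"]))
```

    The programmer writes the short list of calls; the machine, not a human, does the tedious expansion, which is exactly the labor-saving step Hopper automated.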

    During the mid-20th century, as the next wave of the feminist movement was gathering steam, a few unique, motivated women chose to differentiate themselves from the norm. Although women do not value their careers any less than their male counterparts, they are stuck in a particularly narrow set of circumstances if they want to have a family and raise children, and this applies to any field of expertise. The field of software engineering, however, possesses a fairly unique trait: it doesn’t necessarily require a specific work environment. In other words, telecommuting is far more feasible in software engineering than in most professions, and a few remarkable women recognized this possibility and took advantage of it.

    Stephanie Shirley, for example, was a mathematics major who worked at the Post Office Research Station in London in the 1950s (VII, 125). After getting married to a coworker in 1959, she took a new job elsewhere, but was soon faced with the prospect of building a family. Instead of quitting her career and solely dedicating the rest of her life to childrearing, she took a chance and decided to “become her own boss” (VII, 126).

    Shirley quit her new job and created a startup, “Freelance Programmers Ltd,” in 1962. Working part-time on small freelance contracts, she eventually garnered so much business that she required additional help to fulfill her contracts. She began hiring other women who wanted to raise children but didn’t want to sacrifice their careers to do so (VII, 126-127). By 1971 the company had more than 60 employees and had been renamed “F International” (FI), and by 1984 it had nearly 750 employees, with about “three-quarters of them employed at any given time” (VII, 127). By 1985, the company was earning annual revenues in excess of £7 million.

    Although these are only brief segments of a much larger history of the field of computing, women built much of the field’s foundation, and it can only be expected that they will continue to shape the associated professions as we move into a more enlightened and less sexist era. Labor feminization is still prevalent in many fields, but as we’ve seen in the history of computing, determined women can rise up and force modern culture to reconcile its misconceptions.

    References:

    I – Computer: A History of the Information Machine; Campbell-Kelly, Aspray, Ensmenger, Yost.

    II – Colossus: The Secrets of Bletchley Park’s Codebreaking Computers; Copeland.

    III – Grace Hopper and the Invention of the Information Age; Beyer.

    IV – Grace Hopper (film).

    V – When Computers Were Women; Light.

    VI – Only the Clothes Changed: Women Operators in British Computing and Advertising; Hicks.

    VII – Recoding Gender; Abbate.
