Themes in the history of computing

Last class we discussed some of the larger themes and trends that we’ve encountered so far in our study of computing history. Using those insights, complete the following essay assignment, which is due by 9pm on Saturday, October 5th (a small extension from the due date listed on your syllabus). Your comment will not show up immediately, as I have to review and approve each one.

Pick 2 themes we’ve discussed in class and encountered in the readings so far. Write an essay that shows how these themes align, or how they may seem to contradict each other, making sure you have a clear argument that teaches us something new and shows change over time. Length: 450-650 words. (This question will be good review for the midterm exam on October 10th, so it’s worth putting in a bit of time and effort to ensure you have a good argument well supported by evidence.)

4 comments

  1. Karl Kozlowski

    Inventors are the ones who come up with the idea for a new technology; implementers are the ones who actually build it. Specialized machines are designed for a specific task, while universal machines are designed to handle any task, as long as that task can be programmed into them. In the history of computing, one subject of debate has been whether inventors or implementers hold the power to direct the field. It seems that throughout the history of computing, implementers have held this power, whether the machines of the day were specialized or universal.

    This was clearly true in the nineteenth century. When Charles Babbage proposed his idea for the Analytical Engine in 1833, he lost his means of making it a reality, mainly because the British government saw him as more interested in building the Analytical Engine than in producing the nautical tables they wanted from him (Campbell-Kelly et al., 8). Babbage was thus an inventor, but not an implementer. Analog computers, on the other hand, were implemented both before and after this period, and they were prime examples of specialized machines (Campbell-Kelly et al., 46-50). One of them, the mechanical tide predictor, was improved upon and used extensively until the 1950s (Campbell-Kelly et al., 47-48). Such machines endured because they could actually be implemented, unlike their universal counterparts.

    This can also be seen in the inventor and implementers of the Harvard Mark I, which essentially aimed to realize the conceptual ideas behind the Analytical Engine (Campbell-Kelly et al., 56). Howard Aiken started the project in 1937, but the machine was not actually functional until it was completed by IBM and programmed by others in 1944 (Beyer, 36-40). In hindsight, those programmers went on to hold the power to direct the field, as the Harvard Mark I became “a fertile training ground for early computer pioneers” (Campbell-Kelly et al., 59).

    This can be seen yet again in the early years of the Cold War. Project Whirlwind “grew out of a contract to design an ‘aircraft trainer’ during World War II” (Campbell-Kelly et al., 143). The goal was to make a flight simulator that could be used for any aircraft, instead of one specific model. The engineers working on Project Whirlwind incorporated a vast array of innovative technologies into their design, hoping that its real-time functions could be applied far more widely (a transition, therefore, from a specialized machine to a universal one). But the Office of Naval Research was not concerned with such applications, and the project might never have come to fruition if the perceived necessity of the SAGE defense system had not presented such an opportunity (Campbell-Kelly et al., 143-150).

    These examples illustrate that specialized or universal machines were each considered better in a given situation, depending primarily on what it would take to implement them. Invention and implementation went hand in hand in the last example, but many of those inventions might have been in vain if an opportunity for implementation had not presented itself.

  2. Daniel

    The government’s role as a frequent source of early funding for technological development has had an interesting effect on the process of technological change. Rather than occasional revolutionary breakthroughs, government-funded development has largely achieved success through evolutionary improvement of existing technologies, even when dropping current projects for new ones might ultimately have resulted in faster development. The same applies to later companies that received a large portion of their funding through government contracts. This dynamic produced the impressively continuous history of computing tracing back to the early nineteenth century, up until the rise of private funding in the second half of the twentieth century.

    This phenomenon can be seen as early as 1833, when Charles Babbage sought funding from the British government to discontinue work on his earlier, less impressive Difference Engine and develop his new, forward-thinking Analytical Engine. Wary of the costs incurred by even the relatively tame Difference Engine, the government declined to finance further development. Babbage was one of the first people in the fledgling “computer” industry to suffer from the conservative tendency of government funding, but he was certainly not the last. Unfortunately, it did not work in his favor as it did for some others.

    Though the government often declines to invest in high-risk, high-reward, possibly revolutionary developments, its disposition toward constant improvement of existing technologies has certainly helped broaden the usage and appeal of computing technologies. One example is the relatively early adoption of Hollerith information-processing systems for the 1890 census, which allowed easier manipulation of data while keeping a similar workflow for collection (Computer 35).

    This tendency of government to be financially cautious weakens somewhat during wartime, hence the immense expenditure and rash adoption of many projects. Some of these, like ENIAC, ended up doing little for the war effort. They did, however, essentially jump-start the private computing industry, allowing it to avoid the slow development that had historically been the rule for a new industry. The calculating machine industry that predated the modern computing industry arguably took nearly a century to mature, dating back to the days of Babbage. That period was essentially condensed into just a couple of decades through near-reckless governmental spending around the Second World War. The machines and people funded during this period provided the inspiration and manpower behind dozens of companies, though interestingly not IBM, at least not immediately.

    Among the titans of the post-war computing industry, IBM was initially not the behemoth it would become. At the end of the war, IBM existed essentially at the whim of the government; the company had been saved from failure during the Depression by the immense government need for calculating machines created by the New Deal (Computer 69). As such, it was initially more cautious, doing only what customers, and especially the government, would pay for. This ended up very much in IBM’s favor. Its late entry into the computing industry allowed it to examine the shortfalls of existing solutions and to dominate the industry within five years. The computing systems developed during this time were a foundation of IBM’s dominance through the 1970s (Computer 163). This was a direct result of IBM’s reliance on government funding, since its entry was financed by older, less revolutionary equipment still being leased to the government (still 65% of IBM’s income in 1959).

    History shows that government funding often slows the growth of new technologies, especially compared with the funding provided by business, both in the mid-twentieth century and today. However, the government is willing to fund what business will not, and as such deserves credit for allowing the computer industry to begin, especially since it developed before the era of industry-funded research and development that we see today.

  3. Yaser Kazmi

    Today, computers and related technology are often seen as things everyone is expected to own. During the early 1940s and well into the late 1950s, however, computing was still a developing technology, and while it was in its developmental stages it was hindered by fierce opposition. Political and social beliefs during this period deprived many of computing’s greatest contributors of the chance to use their abilities. In a very unfortunate manner, social beliefs pushed one of Britain’s greatest inventors, and undoubtedly one of its heroes, to commit suicide. One can also see the struggles of Grace Hopper and other women.

    Women were at a great disadvantage in the early years of computing despite the substantial effort they had put into its development. “Pioneers such as Hopper are faced with far more than technical conundrums. They must deal with a variety of social and psychological pressures associated with the very act of exploring uncharted intellectual waters.” After WWII, Hopper fell victim to alcoholism, considered suicide, and ended her marriage (Beyer 175-176). The pessimistic view society had of computing created immense stress for those working in the field. Hopper states, “‘There were not the same opportunities for women in larger companies like Remington Rand. They were older companies, and the jobs had been stereotyped.’” (Beyer 211) Gender roles created by society negatively affected women. The attitudes of the men at Remington Rand were primarily conservative, and these conservative attitudes created a counterproductive environment for women employees like Hopper and Betty Snyder. Although men were not subject to gender discrimination, women and men alike were affected by society and politics during the 1940s and 1950s.

    Keeping the Colossus a secret until several decades after the war ended was not a wise decision on behalf of the British government. The politics surrounding the secrecy of the Colossus appeared to matter more than the agendas of those who created it. The lives of symbolic individuals such as Tommy Flowers and Alan Turing may have turned out much differently had the Colossi not been kept secret. “The Newmanry’s Colossi might have passed into the public domain at the end of the fighting, to become, like ENIAC, the electronic muscle of a scientific research facility.” (Copeland, Colossus: Breaking the German ‘Tunny’ Code at Bletchley Park: An Illustrated History) Not only were the inventors of the Colossus denied recognition, but withholding the machine meant that it effectively did not exist for much of the scientific community, which could otherwise have benefitted from its technology and helped accelerate the field of computing.

    Turing’s contributions to the war effort were unknown to many because the Colossus project was heavily concealed. “What none of his friends knew in 1954 was that he had been the chief scientific figure in the codebreaking operation throughout World War II.” (Hodges, “Alan Turing: Gay codebreaker’s defiance keeps memory alive”) What Alan Turing became known for instead was his homosexuality. The British government considered homosexuality a criminal offence; Turing was prosecuted for it and eventually subjected to a harsh chemical treatment: “He was obliged to undertake injections of female hormones intended to render him asexual.” (Hodges, “Alan Turing: Gay codebreaker’s defiance keeps memory alive”) Alan Turing committed suicide in June of 1954. Had the public been aware of his contributions, Turing might not have committed suicide, nor fallen victim to the hormone injections. The public would probably have defended him as a wartime hero, and would have seen that Alan Turing, a homosexual, had played such an important role during the war and for computing. With this knowledge, society could have changed its pessimistic views on homosexuality earlier.

    Disheartened though many of these individuals were, they committed themselves to advancing the field and are owed a great deal of respect for their lasting efforts. Their unique stories represent the struggles faced by many individuals early in the modern computing era. From their stories we can learn to withhold judgment of unfamiliar technologies, and likewise to accept individuals regardless of details such as sexual orientation and gender.

  4. Jamalk

    Universities and militarization played significant parallel roles in the history of computing. However, since academics and historians are usually the ones who write history, it is easy to forget the practitioners who devoted themselves to tinkering with machines. In this comment I will show the nature of the roles that universities and governments played, and how, contrary to the common narrative, those roles were not always positive.

    The computer started as military machinery, with very specific purposes like code-breaking and calculating missile trajectories. Universities like MIT, Harvard, and the University of Pennsylvania were the prime scene for these technological breakthroughs; however, universities, and in particular those in the US, showed little interest during the 1940s and 1950s in using computers for any purposes other than those defined by governments. Universities merely provided the resources that enabled these machines to operate. For example, the Harvard Mark I started working for the US Navy in 1944.

    This militarization created many restrictions: people who worked with these machines needed to be trustworthy, patriotic, and, surprisingly enough, heterosexual. Those who did not comply with these limitations were considered a ‘threat’ to national security, as what happened to Alan Turing in 1952 illustrates.

    Universities, nonetheless, also had their own kinds of restrictions. The Ivy League schools where computers first operated were racing for patents and honors, and this led them to neglect ideas that did not fit their agendas. Eckert and Mauchly, who were keen to bring the computer to the public, were pressed to sign over their patent rights to the University of Pennsylvania, and instead left to start their own computer company in 1947.

    This militarization, together with the discouraging atmosphere in the universities, led to a schism between two kinds of programmers and users: those who worked with governments and had certain privileges, like Grace Hopper, and those who were more like hobbyists, believing in computers as futuristic machines that should be embraced in every field, like Alfred Kinsey, who used punched-card machines in 1941 for his research on human sexuality.

    There is no doubt that computers were primarily born to solve grand problems related to governments, and that computers at that time required an enormously rich scientific and financial infrastructure that only governments and universities could offer. At the same time, we cannot ignore that, because computers were highly classified and so expensive, students were not authorized to tinker with these new machines on their own and expand their potential. This made the field of computer science largely obscure to the public, which led to many problems whose consequences we are still suffering from today. However, this exclusivity provoked a reaction in the form of computer clubs that would play an important role in the future, like the Homebrew Computer Club in 1975.
