electrohippies Occasional Paper No.2, December 2000

"I've seen the future and it has a penguin on it"
– how the 'open source' revolution will change IT

By Paul Mobbs/the electrohippie collective




The world of the personal computer is changing. For the past thirty years these changes have been driven by new technology. But as we begin to reach the physical limits of the technology, new factors have taken over as the drivers of change – chief among them the 'open source' movement described in this briefing.

This briefing details the current changes in the IT industry as the old order is challenged by the open source movement, and how the public and community groups can benefit from this change. Fundamentally, the IT revolution of the past 40 years happened because business wanted it to happen; business deemed it to be in our best interests. The 'open source' revolution will happen because people will demand it – in order to have access to the new electronic society that is developing across the globe – and because business will see the financial advantage in it.


The PC revolution Part 1 – good for business, bad for poor people

Many years ago, just as the personal computer (PC) revolution was beginning, an industry observer said, "I've seen the future and it computes". In the mid-1970s computers were large, room-sized units, called 'mainframes'. They were very expensive, so could only be used by large organisations, and required specialist operators. They also used various computer operating systems, such as Unix or VMS, that only worked on large computers. Personal computers turned this model on its head.

The first personal computers were developed by electrical engineers, essentially as a hobby. The first mass-produced PC was the Altair, produced in the mid-70s, which had to be programmed by hand using switches. The first really functional mass-produced PC was the Apple II, produced in 1977. But things did not really take off until the early 1980s, when IBM – the leader in mainframe computers – produced its own PC. Next, the development of networking between PCs began to supplant the role of the large mainframe computer. From the mid-80s until the mid-1990s, networks of desktop PCs slowly took over from large mainframe computers for most business applications.

But the development of the PC has not been all good. A very few corporations dominate hardware (the computer) and software (the programs that run on the computer) production. These corporations make huge sums of money from this domination of the PC market. Some, most prominently Microsoft, have been the subject of government investigations because of this dominant position.

This domination of the PC software market, by Microsoft as much as by the others, is based on one simple factor – all proprietary computer software must be licensed for use and, usually, a single copy must be bought for each computer on which the program is used. Therefore, a large company with 100 PCs buys 100 copies of a word processor, spreadsheet or email package. It may only open one copy of the software, put the rest in a cupboard, and then install that program on all 100 computers. But it must buy copies for all the computers to remain legal.

The production of computer software is led, mainly, by the business market (games have an influence, but a limited one). In the beginning, the use of computers in the home was limited by the cost of the computer itself. Today the availability of new, cheap systems, and a large choice of second-hand ex-leasing equipment, means that the cost of the computer has dropped significantly. But the cost of software has not. Therefore, if you had twenty old computers that you wanted to give to a community project for nothing, and you wanted to remain legal, you would still have to find a large amount of money to cover the cost of the software.

In summary – at the end of the 1990s the PC, and Microsoft's software for it, dominated the business computing industry. But outside business computing, the structure of that industry restricts the use of computers for educational and community purposes. The effective control of the 'intellectual property' that closed software represents also limits the development of software applications beyond the scope of what developers believe the market will support.


The PC revolution Part 2 – the 'open source' movement

The 'open source' software movement was long seen by the mainstream computer industry as an eccentric joke: students and maverick programmers spending their time writing computer software just to "give it away". But today, that's exactly what some of the larger software producers are considering.

'Open source' software is computer code that has only a minimal license governing its installation and use. Rather than a ready-made working program, normally the 'program code' (the 'source code') itself is distributed, and the user can then build and install that code quite simply on their own system. There are 20 or more different open source licenses, but they all say much the same thing. The general terms of these licenses are set out in the general 'Open Source Definition':

  1. Free Redistribution – the license may not restrict any party from selling or giving away the code;

  2. Source Code – the program must include source code, and must allow distribution in source code as well as compiled form;

  3. Derived Works – the license must allow modifications and derived works, and must allow them to be distributed under the same terms as the license of the original software;

  4. Integrity of The Author's Source Code – the license may restrict the source-code from being distributed in modified form only if the license allows the distribution of "patch files" with the source code for the purpose of modifying the program at build time (but the license may require derived works to carry a different name or version number from the original software);

  5. No Discrimination Against Persons or Groups – the license must not discriminate against any person or group of persons;

  6. No Discrimination Against Fields of Endeavour – the license must not restrict anyone from making use of the program in a specific field of endeavour, for example, it may not restrict the program from being used in a business, or for military or genetic research;

  7. Distribution of License – the rights attached to the program must apply to all to whom the program is redistributed without the need for execution of an additional license by those parties;

  8. License Must Not Be Specific to a Product – the rights attached to the program must not depend on the program's being part of a particular software distribution (if the program is extracted from that distribution and used or distributed within the terms of the program's license, all parties to whom the program is redistributed should have the same rights as those that are granted in conjunction with the original software distribution);

  9. License Must Not Contaminate Other Software – the license must not place restrictions on other software that is distributed along with the licensed software (for example, the license must not insist that all other programs distributed on the same medium must be open-source software).

So, a person can give away the source code, or install it on multiple computers, without paying for it. They may even modify the software, to add features to suit their own needs, so long as they keep the identification of the original author intact. Compared with how software has been developed and used over the last 40 years, this is what the revolution is all about.
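To make point 2 concrete, here is a minimal illustration of what 'distribution as source code' means in practice (the file name and program are invented for this example). Anyone who receives the source file can compile and run it on their own machine, with no license fee and no per-computer restriction:

    $ cat hello.c
    #include <stdio.h>
    int main(void) { printf("Hello, world\n"); return 0; }
    $ gcc -o hello hello.c     # compile the source code into a runnable program
    $ ./hello                  # run the result
    Hello, world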

So, if you give it away, how do you make money? You make money primarily from providing advice on using the software, selling books on how to use it, merchandising, and providing specialist consultancy services to modify features of the software for business users. This may seem strange, but even mainstream corporations are moving towards open source – Apple with its open-source Darwin, and Corel with its Linux products, for example.

Some people, particularly business users, worry that software obtained for free must, by definition, be unreliable. This is not the case. The major Linux software applications, because they are open source, are open to peer review. Many developers working on similar types of package will dip in, check the guts of the program, and announce any problems they find. This open development makes inherent faults less likely over the longer term than the development of software by a small team in a closed environment. It also means that many more people get to apply their knowledge to problem solving than just a small group of company programmers.

The problem with developing open source software is that you usually rely on a non-proprietary operating system to run the programs. To make software applications that run well you have to have access to the inner workings of the operating system. That's very difficult with Windows and DOS (for PCs) and the Mac OS (for the Macintosh – although Apple has recently released an open-source version of the Mac OS core called Darwin), because the full list of functions and operations for closed operating systems is not published.

Ever since the first open source programming groups for the PC grew up around the 'GNU project', the operating system – the 'platform' – on which the programs run has always been a problem. That problem has now been solved, not by Bill Gates or some similar person, but by a Finnish computer science student. There is now an operating system that is open source, works on the PC (it can live alongside an existing Windows operating system), can interface with Windows, and is doubling its user base every year – Linux.


Linux – Unix power for the PC environment

Unix (and comparable operating systems such as VMS) was developed from the late 1960s onwards to provide a standard operating platform for large multi-user computers. Unix provides the basic processes that operate the system, and enables more complex programs – word processors, email and so on – to be run on top of them.

The problem was that Unix could not easily be transferred to PCs because the hardware – the architecture of the computer's CPU – was very different. For example, much was made of 32-bit PC processors such as the '486 and Pentium, but mainframes had been using 32-bit and 64-bit architectures for many years. Unix was a multi-user, multitasking system (that is, a lot of people can use the same computer to do different things simultaneously), like Windows is today, long before Bill Gates had even started work on Windows. Unix source code was also widely available, particularly to universities, meaning that people have been able to develop applications to suit their needs – hence its continuing popularity.

In the early 80s a PC-based version of Unix, called 'Xenix', was developed. But because Xenix was a very complex system, and because few applications were developed for it, it was not popular outside professional computing environments. You also couldn't directly port (transfer) complex programs from Unix to Xenix.

All was quiet on the PC-based Unix front until the early 1990s, when a student at the University of Helsinki, Linus Torvalds, wanted a version of Unix for the PC – so he wrote one. Rather than write it alone, he developed it as 'open source' code with programmers across the Internet. He was therefore able to share his ideas with other programmers, to improve and update the system, and to develop a professional, stable system. Since the release of Linux 1.0 in March 1994, the number of users has grown exponentially.

Linux has one big advantage over Xenix – a large and growing number of applications written for it. Because it's open source, and because users are encouraged to develop their own applications, it is very versatile. Linux also has more than one 'graphical user interface' (GUI) that makes using the system very simple – the same principle as the Windows or Mac OS operating systems. Also, unlike Xenix, mainstream software developers such as Corel have committed themselves to developing open source applications for Linux (quite a commitment for a commercial company). This means that there are now professional office applications for Linux systems, such as Corel's WordPerfect Office.

Linux, like Unix, is also set up as a 'networked' system, so it's very easy to use as the co-ordinating system for joining computers together. Unix was the choice for running the early Internet because of its networking capabilities, and that compatibility means that Linux is now increasingly used for the same purpose. Linux also has many services devoted to working with the Internet. As well as web, email and chat clients, it's also possible to set up more advanced tools such as web servers (many web servers are now Linux based) and email list servers. So long as you have a suitable connection to the Internet, Linux lets you develop Internet services simply, using the features built into many of the standard distributions – see the sketch below.
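As a hedged illustration of how little is involved – the Apache web server ships with most Linux distributions of this period. The paths below are typical of Red Hat 6.x (other distributions put things elsewhere), and the page content is obviously just an example:

    # As root, start the web server and give it a page to serve:
    /etc/rc.d/init.d/httpd start
    echo "<html><body>Hello from our community server</body></html>" \
         > /home/httpd/html/index.html
    # The page can now be viewed at http://localhost/ on this machine,
    # or from any other machine that can reach it over the network.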

Linux also allows you to network many computers in parallel to generate high-powered processing 'clusters'. This is often used for applications requiring enormous amounts of data processing capacity. For example, if you enjoyed watching the ship sink in the last scenes of James Cameron's film Titanic, the computer graphics for that sequence were generated on networked machines using Red Hat Linux as the operating system.

Linux also has one big advantage for those previously committed to Microsoft's systems – it is compatible with both the network and disk filing systems of DOS and Windows. So you don't have to make an 'all or nothing' decision to change to Linux. You can carry on using Microsoft's systems, and then boot into Linux when you want to.

The main problem for those 'converting' from Windows to Linux is the comparative rigidity of the Linux operating system. DOS and Windows are actually very insecure: the whole operating system is open to the user, and programs operate within the same environment as the operating system. So, when things go wrong, an error can take the whole system down with it (the Microsoft 'blue screen of death').

In Linux, like Unix, only one identified user – the 'root user' or 'system administrator' – can make changes to the set up of the computer system, load software and change the accounts of other users. Ordinary users can only access those resources given to them by the 'root' user. They cannot make changes to the computer system, and they only have access to that part of the hard disk set aside for their own use. Additionally, the programs used in Linux are run inside 'shells', so when they crash they can be safely terminated by the operating system. This makes Linux a far safer system to use. It's very difficult to crash the system, and the division between 'users' and the 'root' system administrator means that ordinary users can't change or damage the operating system by accident.
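A brief sketch of this division in practice (the user names are invented for illustration, and the exact wording of the error message varies between versions):

    $ whoami
    alice
    $ echo "mischief" >> /etc/passwd      # an ordinary user cannot touch system files
    bash: /etc/passwd: Permission denied
    $ su -                                # become the root user (asks for root's password)
    Password:
    # useradd bob                         # root can administer the system –
    # passwd bob                          # here, creating an account for a new user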

Once you are able to change your mental map of the computer from Windows to Linux, it is easy to use. As the installation programs become better, it's also getting easier for 'ordinary' people to set up the Linux operating system, even alongside an existing Windows system. Once mastered, users will find that the power of the Linux system – its safety, stability, and ability to be set up and customised to your own needs – makes it far more capable than Windows. As Linux develops and becomes more popular, this simplicity of use is likely to improve further.


Linux 'distributions' – what's in the box?

The Linux operating system is 'open source'. You cannot be charged for the software itself, but those who make it available can charge for the CD it's written on, or the manuals it's supplied with. Various companies have now developed their own packages containing the Linux operating system and various other open source programs, called 'distributions'. You can obtain a Linux distribution in four ways: by downloading it from the Internet; from a free magazine cover disk; supplied with a book; or by buying a boxed distribution.

Although the Linux kernel is common to all distributions, there are a variety of 'add-ons' that differentiate one distribution from another. There are more than 15 different Linux distributions, but the main ones are summarised in the table below. For those new to Linux who need a helping hand, Red Hat and SuSE probably have the greatest range of manuals and books available on their installation and use, but Caldera's Open Linux is commonly available bundled with Linux books.

Linux will theoretically work on any PC with a '386 processor or better. But for the novice user, you'll have problems installing Linux on any PC with less memory or hard disk space than the minimums given in the table below.

Linux will install with less than this, but you will have to select manually the components to install, which requires some knowledge of how Linux works. The details are given in the table below.

For those with fully functional laptop computers, installing Linux should be fine, although you may have problems if specialist driver programs are required for the display or PCMCIA ports. On laptops with a restricted operating system, such as those running Windows CE, you can't install Linux (yet – there are hundreds of people globally working on problems like this, so a solution usually comes along in time).

The main problem with Linux is that many hardware manufacturers have developed their peripherals to be compatible with Windows operating systems only. Windows 'plug and play' devices especially can have problems with Linux unless someone has written a Linux driver program to work the hardware (the original distributions are far more likely to have special drivers for plug and play devices than the magazine cover versions). This is one of the main reasons why Linux users keep both Linux and Windows resident on their systems: you can boot into Windows to run the problematic peripherals, such as scanners, digital cameras or special colour printers, and then boot back into Linux to manipulate the files they generate, by accessing the data in the Windows area of the hard disk.
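Accessing the Windows area of the disk from Linux is a matter of 'mounting' the Windows partition. A minimal sketch, run as root, assuming Windows lives on the first partition of the first IDE disk (/dev/hda1 – yours may differ):

    mkdir -p /mnt/windows                  # create a place to attach the partition
    mount -t vfat /dev/hda1 /mnt/windows   # attach the Windows (FAT) partition
    ls /mnt/windows                        # the Windows files are now visible here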

To install Linux the most important thing to assemble is an equipment list. This is a list of the features of your system – such as size of hard disk, memory, type of display card and monitor, and the type of network (if any). The installation program should take you through this process step by step. If you know very little about computers this is not a major problem as the Linux distribution's manual and the installation program will normally explain the concepts and do the complex work for you.

Distributions are being updated all the time. This is partly as a result of the Linux kernel being refined and developed, and as new facilities to handle Windows-based plug and play hardware are added. For example, the technical information on video cards was always kept secret by the manufacturers because it was considered a 'trade secret'. But recently some manufacturers have agreed to give these details to Linux developers to enable Linux-compatible drivers to be added to the common distributions.


The main Linux distributions and their specifications
Caldera Open Linux 2.4 – GUIs: KDE (no GNOME). Disk space: 170MB (650MB). CPU: 386 and up; AMD K5 and up. RAM: 32MB. Install from: CD. Support: 30 days phone, 90 days e-mail. Website: http://calderasystems.com/edesktop/

Corel Linux – GUIs: KDE (no GNOME). Disk space: 500MB. CPU: Pentium and up. RAM: 24MB (64MB). Install from: CD. Support: unlimited. Website: http://linux.corel.com/products/linux_os/

Debian Linux 2.2 – GUIs: GNOME (no KDE). Disk space: 35MB (800MB). CPU: 386 and up. RAM: 4MB. Install from: CD, hard disk, net. Support: not included. Website: http://debian.org/

Linux-Mandrake 7.1 – GUIs: KDE and GNOME. Disk space: 400MB (1GB). CPU: Pentium and up. RAM: 24MB (64MB). Install from: CD. Support: 100 days e-mail. Website: http://www.linux-mandrake.com/en/fpowerpack.php3

Red Hat Linux 6.2 – GUIs: KDE and GNOME. Disk space: 167MB (600MB). CPU: 386 and up. RAM: 16MB (48MB). Install from: CD, net, hard disk. Support: 90 days web or e-mail. Website: http://www.redhat.com/products/rhl_benefits.html

Slackware Linux 7.0 – GUIs: KDE and GNOME. Disk space: 50MB. CPU: 386 and up. RAM: 16MB (64MB). Install from: CD, net, floppy disk. Support: e-mail, phone. Website: http://www.slackware.com/getslack/

Storm Linux 2000 – GUIs: KDE and GNOME. Disk space: 300MB (1GB). CPU: Pentium and up. RAM: 32MB (64MB). Install from: CD. Support: 30 days phone, 90 days e-mail. Website: http://www.stormix.com/products/rain/index_html

SuSE Linux 6.4 – GUIs: KDE and GNOME. Disk space: 600MB (1GB). CPU: 386 and up. RAM: 32MB (64MB). Install from: CD, DVD. Support: 60 days phone and e-mail. Website: http://www.suse.com/products/susesoft/linux/index.html

Turbo Linux 6.0 – GUIs: KDE and GNOME. Disk space: (750MB). CPU: Pentium and up. RAM: 24MB (32MB). Install from: CD. Support: 60 days e-mail. Website: http://turbolinux.com/products/tlw/workstation.html
Note: for disk space and RAM, the first figure is the minimum possible installation; the figure in brackets is the recommended level for a GUI-based workstation installation.


Distributions themselves come with different components. Free disks often give you only a basic system; book distributions give you a working system; bought distributions usually come in different specifications, giving you different levels of system complexity. A typical distribution contains the Linux kernel, one or more GUIs, and a wide range of applications, utilities and development tools.

You will use very few of these tools to begin with, but as you learn about the capabilities of Linux you may eventually use more of them.

As well as the standard distributions, a wide variety of Linux software can be downloaded from the Internet or cheaply/freely obtained on CD. These packages have to be installed. Linux has a standard configuration, common across all installations, that allots certain parts of the hard disk to program sources, to installed programs, to configuration files, etc. This makes installing a package a fairly straightforward process, as the sketch below illustrates.
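A hedged sketch of the two common routes (the package and file names are invented for illustration). Distributions such as Red Hat use pre-built 'RPM' packages; software distributed as a source 'tarball' is commonly built with a standard three-step routine:

    # Installing a pre-built package (RPM-based distributions), as root:
    rpm -i someprogram-1.0.rpm

    # Building and installing from source code:
    tar xzvf someprogram-1.0.tar.gz   # unpack the source
    cd someprogram-1.0
    ./configure                       # inspect the system and set up the build
    make                              # compile the program
    make install                      # copy it into the standard directories (as root)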

Linux distributions are improving all the time. For example, a new version of the KDE GUI has just been released, and is likely to be included in new Linux distributions shortly. KDE 2 comes complete with a new set of applications called the KOffice suite. This consists of a spreadsheet application (KSpread), a vector drawing application (KIllustrator), a frame-based word-processing application (KWord), a presentation program (KPresenter), and a chart and diagram application (KChart). The developers of open source software are improving their applications all the time, so getting the latest distribution is always a good idea. Even if you miss a new application, such as KOffice, it can still be downloaded – free – from the Internet.

Linux does take a little while to pick up, particularly if you've been conditioned by Microsoft's operating systems into believing that anything goes. To begin with, once you've set up your system, configured a printer, and perhaps Internet networking, you'll probably have a long period of just getting used to working with Linux. Fairly soon you will want to get involved in downloading and installing software to improve your system. More complex packages, such as word processors, often come with their own 'scripts' which run the installation process for you, so you may not find this too difficult.

After a while you may become more interested in certain aspects of Linux, such as the Internet, networking, or developing small scripts or programs to run little processes for you (a trivial example follows), and so you may begin to venture into more adventurous uses of Linux's facilities. But unlike Windows, where you have to get new software, on Linux many of the facilities you will need are already installed on your system.
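As a hedged, trivial example of the kind of 'little process' meant here – a three-line shell script that backs up a user's documents into a dated archive (the directory names are invented for this example):

    #!/bin/sh
    # Back up the 'docs' directory into a dated archive under 'backups'.
    mkdir -p $HOME/backups
    tar czf $HOME/backups/docs-`date +%Y%m%d`.tar.gz $HOME/docs

Everything used here – the shell, tar, date – is part of every standard Linux distribution, which is the point: no new software is needed.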


Linux + Windows – partitioning and 'dual boot' systems

DOS and Windows are rather basic; Linux has far more control over the PC's hardware. One significant capability is the installation of a 'dual boot' system – you get to choose your operating system when you start the computer. Linux has a special feature called LILO (the Linux Loader) so that you can install Linux alongside an existing DOS or Windows system. Then, when you turn on your machine, LILO asks you to choose which operating system you wish to use. Linux can also access the files on the computer's Windows/DOS system (but, unfortunately, not the other way around) so you can access your Windows work from Linux.
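For illustration, a minimal sketch of LILO's configuration file, /etc/lilo.conf, for a dual-boot machine. The partition names are assumptions (here Windows is on /dev/hda1 and Linux on /dev/hda2), and the installation program normally writes this file for you:

    boot=/dev/hda            # install LILO on the disk's master boot record
    prompt                   # ask which system to boot
    timeout=100              # wait 10 seconds, then boot the default

    image=/boot/vmlinuz      # the Linux kernel
        label=linux
        root=/dev/hda2       # the Linux root partition

    other=/dev/hda1          # the existing Windows/DOS partition
        label=windows

After editing the file, running the 'lilo' command as root installs the new configuration; again, the distribution's installation program normally does all of this for you.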

Most Linux systems require you to 'repartition' the hard disk of the computer – which may be a problem for some people. But more recent versions – such as Red Hat Linux 6.2 – enable you to install Linux on the Windows/DOS formatted hard disk, and choose which system to use at start up.

When setting up a dual boot system alongside an existing Windows system, you can have problems depending upon the configuration of Windows. The default installation for Windows is to put everything into one 'drive' (usually called 'C:') set up on the hard disk. If your Windows system has a number of 'drives', you can simply reassign one of those drives to Linux (but all data on that drive will be lost). For Windows installations with just one drive, you'll have to use a special repartitioning program, such as Partition Magic, that moves data around the hard disk in order to create the space for Linux to use (some distributions, like Caldera's Open Linux, come with Partition Magic for this reason). In any case, before setting up a dual boot system you should always back up any essential data on the hard disk first – just in case.

If you want a Windows (95 or 98) system and a Linux system, you'll need to have at least a 4 gigabyte hard disk – and more is advisable to ensure you have enough additional disk space for data.

The details of the partitioning and installation for each distribution vary slightly, so you should carefully read the installation manual before proceeding.


Killing obsolescence – giving old hardware new life

If you've ever tried running Windows 95 on an old '486 computer you'll have realised that there is a clear relationship between system developers such as Microsoft and the forced obsolescence of computer equipment. Developers like Microsoft, who write for the business world where having the latest equipment is no problem, are greedy for system resources. That of course means that those people who can't afford a new computer every 18 months must either carry on using the old equipment, or simply be incompatible with the mainstream – a very curious form of 'digital social exclusion'.

There are many perfectly functional '486 computers around (even '386s for that matter) but the latest Microsoft applications won't work on them. You might be able to obtain second-hand software that was written for that type of computer five or eight years ago, but it will not be compatible with the file formats used by more recent software – hence, you're still excluded.

Linux has enabled old computers to be put to new uses. An old 166MHz Pentium, whilst labouring to run Windows 98, will run many Linux applications very well – particularly the text-based ('console') applications. Nor does using Linux leave you stuck with old file formats: there are many programmers writing add-ons for Linux, usually to satisfy their own needs, so there's likely to be a 'filter' program available to convert a more modern file format into the format you need for your Linux application.

There are therefore clear benefits for small businesses and people using the computer for office-type applications at home. But the most significant potential is in the development of community computing resources. Community projects often obtain or are donated old computers from local businesses, or large corporations hand them out as part of their 'ethical disposal' schemes. Under the old Windows regime this didn't produce a lot of benefit, because community groups were still restricted by the cost of software. But if Linux applications are used rather than Windows, then software costs become minimal or negligible in terms of the overall costs. In fact the main overhead, after computers and software, is likely to be training.

For example, if PCs are bought in bulk (more than 20 at a time) from the auctioning of ex-leasing computers, quite reasonably powerful systems can be had for very little money (prices vary according to many factors, but can be as low as £50 each for a Pentium computer and monitor). But to put software on each of those systems, even buying used software, will cost a further £75 to £100 per system because of the licensing issue – £1,500 to £2,000 for twenty machines. However, if you were to buy one copy of Corel WordPerfect Office 2000 (price around £80), which includes Corel's Linux distribution as well as the Corel office applications, you could install that software on all the computers without restriction (and why buy at all? – you could download it from the Internet in any case).

Clearly, those wishing to develop community-based computing would be better off using Linux than Windows. This may seem a challenge at the moment, but as Linux becomes more popular, and more help becomes available on using the system, such packages will be easy to put together.


User control – the potential of Linux to enable 'e-action'

As software, especially for the Internet, becomes more complex, developers are beginning to build in blocks to prevent people doing 'undesirable' things. For example, many companies no longer publish email addresses for the public to use; instead they use forms on the World Wide Web. It would be useful to tie the monotonous filling-in of these forms to some sort of automated letter-writing program for members of the public to use. But this possibility was recently blocked in new web browsers – perhaps because it was being used for precisely that purpose.
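Under Linux, nothing stops you doing this from outside the browser. A hedged sketch using the freely available 'curl' tool – the web address and field names here are entirely invented, and in practice you would read the real ones from the form's HTML page:

    # Submit a prepared letter to a company's web feedback form:
    curl "http://www.example-company.com/feedback.cgi" \
         -d "name=A.+Campaigner" \
         -d "email=campaign@example.org" \
         -d "message=Please+publish+your+environmental+audit+in+full"

Wrapped in a small script, the same command could send a letter on behalf of every member of a campaign group.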

As the state, and especially corporations, try to clamp down on unrestricted use of the Internet by the most 'active' members of the public, such restrictions on use are ever more likely. But with Linux those restrictions are meaningless, because the source code of the browsers and other tools is open – any block a developer builds in, a user can take out again.

From the point of view of those working for change in society, Linux is very exciting. Linux is to the use of computers what the Internet has been to the networking of different campaign groups. Once you move beyond the point where the state, or the status quo of the corporate world, dominates the IT agenda, anything is possible! There could have been no Seattle or Prague protests without the Internet. So what will Linux ultimately mean for the wider use of computers by campaign groups?

The possibilities this opens up are potentially the most exciting development of all:

Linux enables so many possibilities – all that is required is the will to develop them, and a few people with the skills to – in the traditions of the open source movement – build and share these tools with others.

As broadband communications develop over the next few years, the potential for Linux as the basis for citizen campaigns will really take off. In a situation where you have a line connected all of the time, for no extra money, having a computer connected 24 hours a day becomes viable for even the smaller campaign groups. Then, not only does the development of web services become simpler, but the tools available through Linux will enable far more effective communication between individuals, unfiltered by the state or corporate world.

Currently many campaign groups are denied email or web services by Internet Service Providers because of the 'risk' of legal action, or complaints from government or lobby groups about the activities their system is being used for. In a situation where anyone with one of the new ADSL broadband lines can be their own service provider, such 'informal' censorship becomes meaningless. At the community level it also means that geographically identified or communally owned Internet services could spring up at very low cost.

At the more proactive level, Linux is a potential revolution too. A few years ago lobbying or citizen action on the Internet was just not possible – there were not enough interested people with Internet access to make it happen. Today, everything from student campaigns against sweatshop labour to the recent UK fuel protests has been organised using the Internet. But, as part of a backlash by the establishment against such usage, there are efforts – through informal policies developed by service providers, through new laws, and through blocks engineered into the mainstream web browsers – to restrict such action. In five years' time, though, there may be enough Linux users for Linux tools to be developed that enable citizen action over the Internet in a way unimaginable in the current world dominated by closed and proprietary software. What value the informal censorship of campaigners and community groups then?

Perhaps the greatest development over the next five to ten years will be in the field of media. The convergence of the media – of radio, TV, Internet and telephone – means that media ownership is concentrating. Even though we have more channels, this will of course necessitate a greater streamlining of the content of those channels. Groups perceived as 'fringe', or not newsworthy, will be written out of this new 'megalo-media'.

Linux solves this problem. With the advent of Linux, the scope for community media is today limited only by bandwidth. Community groups that had been trying for years to set up radio stations in the UK, and had been consistently denied access to the radio spectrum by government, are already setting up audio streaming on the web instead. Want to run a TV station? When bandwidth permits, live streaming of video will become a reality for any group able to set up a server to stream their output. All that's necessary is a server configured using the Linux applications that are likely to arise over the next few years as these options become a possibility, and access to a good broadband line.


In conclusion – don't get too excited just yet

For many in the e-corporations there's a lot to be gained from open source. But, as explained in this paper, there is also a lot that society as a whole can gain, and so some security organisations and some governments will actually see this as a threat in the longer term. the electrohippies see Linux and open source as a means to realise the benefits of computers and networking for communities everywhere. That necessarily entails their use as a lobbying and organising tool. This will not be welcomed by those who wish to establish the Internet as a commerce-led system.

Proprietary, closed software is all about preserving control. This control is justified by the notion of intellectual property, but really it is about preserving a monopoly of knowledge in order to make more money. The open source movement challenges both of these notions: it still acknowledges the intellectual property of the program's author, whilst at the same time enabling a system where income can be derived from the use of the application.

Linux enables this because it is, in IT jargon, a 'killer app' (a killer application). It is a powerful operating system, conformant with the Unix standard, that works on everyday IBM compatible PCs. It's open source, so programmers are not restricted when developing applications to run on the operating system. It's free to use and distribute, so those with the will can develop their own distribution to serve their own needs.

But for many people, the 'killer' feature is that it's 'not Microsoft'. That is not so much a comment on who writes the applications as on the fact that the applications and operating system do not conform to the business-oriented status quo that has been the underpinning philosophy of the IT industry since the 70s. Linux puts the choice about what system you want, and what you want to do with it, in the hands of the user. No more animated paperclips in your word processor that you can't turn off! – you choose the features you want.

But this new nirvana – for Microsoft detractors, or for people wanting to develop their computer resources to suit their needs at low cost – won't arrive for a few years. We need three things to happen: the continued development of applications to match those available under Windows; ever simpler installation and configuration; and, above all, support from the hardware manufacturers for Linux-compatible drivers.

This last point is the key. This briefing was drafted under Linux, but finished off and printed under Windows, because the printer does not have Linux-compatible drivers. As Linux becomes more popular, printers and other peripherals will automatically come with Linux drivers – and this will begin to erode the last major block to people migrating to Linux.

Linux offers a new, hopeful future – a return to the basic principles of the people developing homebrew computers in California during the 1970s, whose ambition was to defeat the centralist domination of the corporate mainframe. The entry of IBM into the PC market created, for twenty years, a closed mentality that has restricted people's use and exploration of the potential of computer technology. What Linux creates is a kind of 'freedom of information' and 'freedom of expression' in the best hippie traditions that inspired the early PC developers.

In the view of the electrohippies collective, we have to encourage this movement.

Linux, and open source software generally, begin to provide a solution to the problems created by the monopolies of the 'closed source' software developers.


the electrohippies collective
December 2000


Further information and resources

Linux, especially as it was developed via the 'Net in the first place, is excellently supported by online information. Most of the Linux distributions have online support, and there are a number of Linux sites that provide information on new developments and give access to hundreds of open source applications.

The following are just a few sites to get you into the world of Linux and open source – from there you can roam to find many more pages to suit your needs and level of computer knowledge.


Open source sites/info.:
  The Open Source Initiative (home of the Open Source Definition) – http://www.opensource.org/
  The GNU Project and the Free Software Foundation – http://www.gnu.org/

Linux sites:
  Linux Online – http://www.linux.org/
  The Linux Documentation Project – http://www.linuxdoc.org/
  The Linux Kernel Archives – http://www.kernel.org/

Linux distribution sites: see the websites listed in the table of distributions earlier in this briefing.


END




© 2000 the electrohippie collective. Produced by Paul Mobbs. Released under the GNU Free Documentation License
(with invariant sections being the document title and author identification, no front-cover texts, and no back-cover texts).