
Monday, March 8, 2010

How Fingerprint Scanners Work

Computerized fingerprint scanners have been a mainstay of spy thrillers for decades, but up until recently, they were pretty exotic technology in the real world. In the past few years, however, scanners have started popping up all over the place -- in police stations, high-security buildings and even on PC keyboards. You can pick up a personal USB fingerprint scanner for less than $100, and just like that, your computer's guarded by high-tech biometrics. Instead of, or in addition to, a password, you need your distinctive print to gain access.

In this article, we'll examine the secrets behind this exciting development in law enforcement and identity security. We'll also see how fingerprint scanner security systems stack up to conventional password and identity card systems, and find out how they can fail.

Fingerprint Basics

Fingerprints are one of those bizarre twists of nature. Human beings happen to have built-in, easily accessible identity cards. You have a unique design, which represents you alone, literally at your fingertips. How did this happen?

People have tiny ridges of skin on their fingers because this particular adaptation was extremely advantageous to the ancestors of the human species. The pattern of ridges and "valleys" on fingers makes it easier for the hands to grip things, in the same way a rubber tread pattern helps a tire grip the road.


The other function of fingerprints, identification, is a total coincidence. Like everything in the human body, these ridges form through a combination of genetic and environmental factors. The genetic code in DNA gives general orders on the way skin should form in a developing fetus, but the specific way it forms is a result of random events. The exact position of the fetus in the womb at a particular moment and the exact composition and density of the surrounding amniotic fluid decide how every individual ridge will form.

So, in addition to the countless things that go into deciding your genetic make-up in the first place, there are innumerable environmental factors influencing the formation of the fingers. Just like the weather conditions that form clouds or the coastline of a beach, the entire development process is so chaotic that, in the entire course of human history, there is virtually no chance of the same exact pattern forming twice.

Consequently, fingerprints are a unique marker for a person, even an identical twin. And while two prints may look basically the same at a glance, a trained investigator or an advanced piece of software can pick out clear, defined differences.

This is the basic idea of fingerprint analysis, in both crime investigation and security. A fingerprint scanner's job is to take the place of a human analyst by collecting a print sample and comparing it to other samples on record.

Optical Scanner

A fingerprint scanner system has two basic jobs -- it needs to get an image of your finger, and it needs to determine whether the pattern of ridges and valleys in this image matches the pattern of ridges and valleys in pre-scanned images.

There are a number of different ways to get an image of somebody's finger. The most common methods today are optical scanning and capacitance scanning. Both types come up with the same sort of image, but they go about it in completely different ways.

The heart of an optical scanner is a charge coupled device (CCD), the same light sensor system used in digital cameras and camcorders. A CCD is simply an array of light-sensitive diodes called photosites, which generate an electrical signal in response to light photons. Each photosite records a pixel, a tiny dot representing the light that hit that spot. Collectively, the light and dark pixels form an image of the scanned scene (a finger, for example). Typically, an analog-to-digital converter in the scanner system processes the analog electrical signal to generate a digital representation of this image. See How Digital Cameras Work for details on CCDs and digital conversion.

The scanning process starts when you place your finger on a glass plate, and a CCD camera takes a picture. The scanner has its own light source, typically an array of light-emitting diodes, to illuminate the ridges of the finger. The CCD system actually generates an inverted image of the finger, with darker areas representing more reflected light (the ridges of the finger) and lighter areas representing less reflected light (the valleys between the ridges).

Before comparing the print to stored data, the scanner processor makes sure the CCD has captured a clear image. It checks the average pixel darkness, or the overall values in a small sample, and rejects the scan if the overall image is too dark or too light. If the image is rejected, the scanner adjusts the exposure time to let in more or less light, and then tries the scan again.

If the darkness level is adequate, the scanner system goes on to check the image definition (how sharp the fingerprint scan is). The processor looks at several straight lines moving horizontally and vertically across the image. If the fingerprint image has good definition, a line running perpendicular to the ridges will be made up of alternating sections of very dark pixels and very light pixels.
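
To make the idea concrete, here's a small Python sketch of the two quality checks just described: an average-darkness test and a definition test that counts alternating runs of dark and light pixels along a line crossing the ridges. The image format and all of the threshold values are invented for illustration; a real scanner's firmware will differ.

# Sketch of the image-quality checks described above (thresholds are hypothetical).
# "image" is a list of rows, each row a list of pixel values from 0 (black) to 255 (white).

def average_ok(image, low=60, high=200):
    # Reject scans that are too dark or too light overall.
    pixels = [p for row in image for p in row]
    average = sum(pixels) / len(pixels)
    return low <= average <= high

def definition_ok(row, dark=80, light=180, min_transitions=6):
    # A sharp print shows alternating runs of very dark (ridge) and very light
    # (valley) pixels along a line that crosses the ridges.
    transitions, last = 0, None
    for p in row:
        label = "dark" if p < dark else "light" if p > light else None
        if label and label != last:
            if last is not None:
                transitions += 1
            last = label
    return transitions >= min_transitions

def scan_is_usable(image):
    # For brevity this checks a single row; a real scanner samples several
    # lines running both horizontally and vertically across the image.
    middle_row = image[len(image) // 2]
    return average_ok(image) and definition_ok(middle_row)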

If the processor finds that the image is crisp and properly exposed, it proceeds to comparing the captured fingerprint with fingerprints on file. We'll look at this process in a minute, but first we'll examine the other major scanning technology, the capacitive scanner.

Capacitance Scanner

Like optical scanners, capacitive fingerprint scanners generate an image of the ridges and valleys that make up a fingerprint. But instead of sensing the print using light, a capacitive scanner uses electrical current.

A simple capacitive sensor is made up of one or more semiconductor chips containing an array of tiny cells. Each cell includes two conductor plates, covered with an insulating layer. The cells are tiny -- smaller than the width of one ridge on a finger.


The sensor is connected to an integrator, an electrical circuit built around an inverting operational amplifier. The inverting amplifier is a complex semiconductor device, made up of a number of transistors, resistors and capacitors. The details of its operation would fill an entire article by itself, but here we can get a general sense of what it does in a capacitance scanner. (Check out this page on operational amplifiers for a technical overview.)

Like any amplifier, an inverting amplifier alters one current based on fluctuations in another current (see How Amplifiers Work for more information). Specifically, the inverting amplifier alters a supply voltage. The alteration is based on the relative voltage of two inputs, called the inverting terminal and the non-inverting terminal. In this case, the non-inverting terminal is connected to ground, and the inverting terminal is connected to a reference voltage supply and a feedback loop. The feedback loop, which is also connected to the amplifier output, includes the two conductor plates.

As you may have recognized, the two conductor plates form a basic capacitor, an electrical component that can store up charge (see How Capacitors Work for details). The surface of the finger acts as a third capacitor plate, separated by the insulating layers in the cell structure and, in the case of the fingerprint valleys, a pocket of air. Varying the distance between the capacitor plates (by moving the finger closer or farther away from the conducting plates) changes the total capacitance (ability to store charge) of the capacitor. Because of this quality, the capacitor in a cell under a ridge will have a greater capacitance than the capacitor in a cell under a valley.

To scan the finger, the processor first closes the reset switch for each cell, which shorts each amplifier's input and output to "balance" the integrator circuit. When the switch is opened again, and the processor applies a fixed charge to the integrator circuit, the capacitors charge up. The capacitance of the feedback loop's capacitor affects the voltage at the amplifier's input, which affects the amplifier's output. Since the distance to the finger alters capacitance, a finger ridge will result in a different voltage output than a finger valley.
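
You can get a feel for why a ridge and a valley read differently using the parallel-plate capacitor formula C = epsilon * A / d: the skin over a ridge sits much closer to the plates than the skin over a valley, so that cell has a larger capacitance and produces a different integrator output. The Python sketch below uses made-up dimensions purely for illustration.

# Rough parallel-plate model of one sensor cell (all dimensions are invented,
# and the insulator and air gap are lumped into a single spacing "d").
EPSILON_0 = 8.854e-12          # permittivity of free space, in farads per meter

def cell_capacitance(area_m2, gap_m):
    # C = epsilon * A / d -- capacitance grows as the finger surface gets closer.
    return EPSILON_0 * area_m2 / gap_m

cell_area = (50e-6) ** 2       # a square cell 50 micrometers on a side
ridge_gap = 1e-6               # skin nearly touching the insulator (ridge)
valley_gap = 50e-6             # air pocket under a valley

c_ridge = cell_capacitance(cell_area, ridge_gap)
c_valley = cell_capacitance(cell_area, valley_gap)
print(c_ridge > c_valley)      # True: the cell under a ridge stores more charge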

The scanner processor reads this voltage output and determines whether it is characteristic of a ridge or a valley. By reading every cell in the sensor array, the processor can put together an overall picture of the fingerprint, similar to the image captured by an optical scanner.

The main advantage of a capacitive scanner is that it requires a real fingerprint-type shape, rather than the pattern of light and dark that makes up the visual impression of a fingerprint. This makes the system harder to trick. Additionally, since they use a semiconductor chip rather than a CCD unit, capacitive scanners tend to be more compact than optical devices.

Analysis

In movies and TV shows, automated fingerprint analyzers typically overlay various fingerprint images to find a match. In actuality, this isn't a particularly practical way to compare fingerprints. Smudging can make two images of the same print look pretty different, so you're rarely going to get a perfect image overlay. Additionally, using the entire fingerprint image in comparative analysis uses a lot of processing power, and it also makes it easier for somebody to steal the print data.

Instead, most fingerprint scanner systems compare specific features of the fingerprint, generally known as minutiae. Typically, human and computer investigators concentrate on points where ridge lines end or where one ridge splits into two (bifurcations). Collectively, these and other distinctive features are sometimes called typica.

The scanner system software uses highly complex algorithms to recognize and analyze these minutiae. The basic idea is to measure the relative positions of minutiae, in the same sort of way you might recognize a part of the sky by the relative positions of stars. A simple way to think of it is to consider the shapes that various minutiae form when you draw straight lines between them. If two prints have three ridge endings and two bifurcations forming the same shape with the same dimensions, there's a high likelihood they're from the same print.

To get a match, the scanner system doesn't have to find the entire pattern of minutiae both in the sample and in the print on record, it simply has to find a sufficient number of minutiae patterns that the two prints have in common. The exact number varies according to the scanner programming.
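
Here's a drastically simplified Python sketch of the "relative positions" idea: treat each minutia as an (x, y) point and compare the pairwise distances between points in the two prints. Real matchers also use ridge angles, minutia types and rotation-tolerant alignment, so treat this only as a toy model; the sample coordinates are invented.

# Toy minutiae comparison: similar relative geometry -> likely the same print.
from itertools import combinations
from math import dist

def pairwise_distances(minutiae):
    # The sorted list of distances between minutiae is a rough "shape signature".
    return sorted(dist(a, b) for a, b in combinations(minutiae, 2))

def likely_match(print_a, print_b, tolerance=2.0, required_fraction=0.8):
    da, db = pairwise_distances(print_a), pairwise_distances(print_b)
    matched = sum(1 for x, y in zip(da, db) if abs(x - y) <= tolerance)
    return matched >= required_fraction * min(len(da), len(db))

sample = [(10, 10), (40, 12), (25, 30), (15, 45), (42, 40)]
stored = [(11, 10), (40, 13), (24, 30), (15, 44), (43, 41)]  # same print, slight noise
print(likely_match(sample, stored))   # True -- the two sets have nearly the same geometry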

Pros and Cons

There are several ways a security system can verify that somebody is an authorized user. Most systems are looking for one or more of the following:
  • What you have
  • What you know
  • Who you are

To get past a "what you have" system, you need some sort of "token," such as an identity card with a magnetic strip. A "what you know" system requires you to enter a password or PIN. A "who you are" system is actually looking for physical evidence that you are who you say you are -- a specific fingerprint, voice or iris pattern.

"Who you are" systems like fingerprint scanners have a number of advantages over other systems. To name a few:

  • Physical attributes are much harder to fake than identity cards.
  • You can't guess a fingerprint pattern like you can guess a password.
  • You can't misplace your fingerprints, irises or voice like you can misplace an access card.
  • You can't forget your fingerprints like you can forget a password.

But, as effective as they are, they certainly aren't infallible, and they do have major disadvantages. Optical scanners can't always distinguish between a picture of a finger and the finger itself, and capacitive scanners can sometimes be fooled by a mold of a person's finger. If somebody did gain access to an authorized user's prints, the person could trick the scanner. In a worst-case scenario, a criminal could even cut off somebody's finger to get past a scanner security system. Some scanners have additional pulse and heat sensors to verify that the finger is alive, rather than a mold or dismembered digit, but even these systems can be fooled by a gelatin print mold over a real finger. (This site explains various ways somebody might trick a scanner.)

To make these security systems more reliable, it's a good idea to combine the biometric analysis with a conventional means of identification, such as a password (in the same way an ATM requires a bank card and a PIN code).

The real problem with biometric security systems is the extent of the damage when somebody does manage to steal the identity information. If you lose your credit card or accidentally tell somebody your secret PIN, you can always get a new card or change your code. But if somebody steals your fingerprints, you're pretty much out of luck for the rest of your life. You wouldn't be able to use your prints as a form of identification until you were absolutely sure all copies had been destroyed. There's no way to get new prints.

But even with this significant drawback, fingerprint scanners and biometric systems are an excellent means of identification. In the future, they'll most likely become an integral part of most people's everyday lives, just like keys, ATM cards and passwords are today.

Source : here

Web 3.0

You've decided to go see a movie and grab a bite to eat afterward. You're in the mood for a comedy and some incredibly spicy Mexican food. Booting up your PC, you open a Web browser and head to Google to search for theater, movie and restaurant information. You need to know which movies are playing in the theaters near you, so you spend some time reading short descriptions of each film before making your choice. Also, you want to see which Mexican restaurants are close to each of these theaters. And, you may want to check for customer reviews for the restaurants. In total, you visit half a dozen Web sites before you're ready to head out the door.

Some Internet experts believe the next generation of the Web -- Web 3.0 -- will make tasks like your search for movies and food faster and easier. Instead of multiple searches, you might type a complex sentence or two in your Web 3.0 browser, and the Web will do the rest. In our example, you could type "I want to see a funny movie and then eat at a good Mexican restaurant. What are my options?" The Web 3.0 browser will analyze your request, search the Internet for all possible answers, and then organize the results for you.

That's not all. Many of these experts believe that the Web 3.0 browser will act like a personal assistant. As you search the Web, the browser learns what you are interested in. The more you use the Web, the more your browser learns about you and the less specific you'll need to be with your questions. Eventually you might be able to ask your browser open questions like "Where should I go for lunch?" Your browser would consult its records of what you like and dislike, take into account your current location and then suggest a list of restaurants.

The Road to Web 3.0

Out of all the Internet buzzwords and jargon that have made the transition to the public consciousness, "Web 2.0" might be the best known. Even though a lot of people have heard of it, not many have any idea what Web 2.0 means. Some people claim that the term itself is nothing more than a marketing ploy designed to convince venture capitalists to invest millions of dollars into Web sites. It's true that when Dale Dougherty of O'Reilly Media came up with the term, there was no clear definition. There wasn't even any agreement about whether there was a Web 1.0.

YouTube is an example of a Web 2.0 site.

Other people insist that Web 2.0 is a reality. In brief, the characteristics of Web 2.0 include:

  • The ability for visitors to make changes to Web pages: Amazon allows visitors to post product reviews. Using an online form, a visitor can add information to Amazon's pages that future visitors will be able to read.
  • Using Web pages to link people to other users: Social networking sites like Facebook and MySpace are popular in part because they make it easy for users to find each other and keep in touch.
  • Fast and efficient ways to share content: YouTube is the perfect example. A YouTube member can create a video and upload it to the site for others to watch in less than an hour.
  • New ways to get information: Today, Internet surfers can subscribe to a Web page's Really Simple Syndication (RSS) feeds and receive notifications of that Web page's updates as long as they maintain an Internet connection.
  • Expanding access to the Internet beyond the computer: Many people access the Internet through devices like cell phones or video game consoles; before long, some experts expect that consumers will access the Internet through television sets and other devices.

Think of Web 1.0 as a library. You can use it as a source of information, but you can't contribute to or change the information in any way. Web 2.0 is more like a big group of friends and acquaintances. You can still use it to receive information, but you also contribute to the conversation and make it a richer experience.

While there are still many people trying to get a grip on Web 2.0, others are already beginning to think about what comes next. What will Web 3.0 be like?

Web 3.0 Basics

Internet experts think Web 3.0 is going to be like having a personal assistant who knows practically everything about you and can access all the information on the Internet to answer any question. Many compare Web 3.0 to a giant database. While Web 2.0 uses the Internet to make connections between people, Web 3.0 will use the Internet to make connections with information. Some experts see Web 3.0 replacing the current Web while others believe it will exist as a separate network.

Planning a tropical getaway? Web 3.0 might help simplify your travel plans. (©iStockphoto/dstephens)

It's easier to get the concept with an example. Let's say that you're thinking about going on a vacation. You want to go someplace warm and tropical. You have set aside a budget of $3,000 for your trip. You want a nice place to stay, but you don't want it to take up too much of your budget. You also want a good deal on a flight.

With the Web technology currently available to you, you'd have to do a lot of research to find the best vacation options. You'd need to research potential destinations and decide which one is right for you. You might visit two or three discount travel sites and compare rates for flights and hotel rooms. You'd spend a lot of your time looking through results on various search engine results pages. The entire process could take several hours.

According to some Internet experts, with Web 3.0 you'll be able to sit back and let the Internet do all the work for you. You could use a search service and narrow the parameters of your search. The browser program then gathers, analyzes and presents the data to you in a way that makes comparison a snap. It can do this because Web 3.0 will be able to understand information on the Web.

Right now, when you use a Web search engine, the engine isn't able to really understand your search. It looks for Web pages that contain the keywords found in your search terms. The search engine can't tell if the Web page is actually relevant for your search. It can only tell that the keyword appears on the Web page. For example, if you searched for the term "Saturn," you'd end up with results for Web pages about the planet and others about the car manufacturer.
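
That limitation is easy to demonstrate. The Python sketch below (with made-up page snippets) shows what purely keyword-based matching amounts to: every page containing the word "Saturn" counts as a hit, whether it's about the planet or the car.

# Naive keyword matching: any page containing the search terms is a "result".
# The example pages are invented.
pages = {
    "planet-facts": "Saturn is the sixth planet from the Sun and has rings.",
    "car-review":   "The Saturn sedan was an affordable compact car.",
    "cooking-blog": "A recipe for incredibly spicy enchiladas.",
}

def keyword_search(query, pages):
    terms = query.lower().split()
    return [name for name, text in pages.items()
            if all(term in text.lower() for term in terms)]

print(keyword_search("Saturn", pages))   # ['planet-facts', 'car-review'] -- both match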

A Web 3.0 search engine could find not only the keywords in your search, but also interpret the context of your request. It would return relevant results and suggest other content related to your search terms. In our vacation example, if you typed "tropical vacation destinations under $3,000" as a search request, the Web 3.0 browser might include a list of fun activities or great restaurants related to the search results. It would treat the entire Internet as a massive database of information available for any query.

Web 3.0 Approaches

You never know how future technology will eventually turn out. In the case of Web 3.0, most Internet experts agree about its general traits. They believe that Web 3.0 will provide users with richer and more relevant experiences. Many also believe that with Web 3.0, every user will have a unique Internet profile based on that user's browsing history. Web 3.0 will use this profile to tailor the browsing experience to each individual. That means that if two different people each performed an Internet search with the same keywords using the same service, they'd receive different results determined by their individual profiles.

Web 3.0 will likely plug into your individual tastes and browsing habits. (©iStockphoto/ktsimage)

The technologies and software required for this kind of application aren't yet mature. Services like TiVo and Pandora provide individualized content based on user input, but they both rely on a trial-and-error approach that isn't as efficient as what the experts say Web 3.0 will be. More importantly, both TiVo and Pandora have a limited scope -- television shows and music, respectively -- whereas Web 3.0 will involve all the information on the Internet.

Some experts believe that the foundation for Web 3.0 will be application programming interfaces (APIs). An API is an interface designed to allow developers to create applications that take advantage of a certain set of resources. Many Web 2.0 sites include APIs that give programmers access to the sites' unique data and capabilities. For example, Facebook's API allows developers to create programs that use Facebook as a staging ground for games, quizzes, product reviews and more.

One Web 2.0 trend that could help the development of Web 3.0 is the mashup. A mashup is the combination of two or more applications into a single application. For example, a developer might combine a program that lets users review restaurants with Google Maps. The new mashup application could show not only restaurant reviews, but also map them out so that the user could see the restaurants' locations. Some Internet experts believe that creating mashups will be so easy in Web 3.0 that anyone will be able to do it.
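
As a hedged illustration of the mashup idea, the Python sketch below joins two imaginary data sources -- one returning restaurant ratings, the other returning map coordinates -- into a single combined view. No real APIs are called here; the function names, restaurants and coordinates are all invented.

# Hypothetical mashup: combine two pretend data sources into one result.
# In a real mashup, each function would call a separate web API.
def fetch_ratings():
    return {"Casa Elena": 4.5, "Taco Norte": 3.8}

def fetch_locations():
    return {"Casa Elena": (37.77, -122.41), "Taco Norte": (37.80, -122.27)}

def restaurant_mashup():
    ratings, locations = fetch_ratings(), fetch_locations()
    return [{"name": name, "rating": rating, "coords": locations.get(name)}
            for name, rating in ratings.items()]

for entry in restaurant_mashup():
    print(entry["name"], entry["rating"], entry["coords"])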

Other experts think that Web 3.0 will start fresh. Instead of using HTML as the basic coding language, it will rely on some new -- and unnamed -- language. These experts suggest it might be easier to start from scratch rather than try to change the current Web. However, this version of Web 3.0 is so theoretical that it's practically impossible to say how it will work.

The man responsible for the World Wide Web has his own theory of what the future of the Web will be. He calls it the Semantic Web, and many Internet experts borrow heavily from his work when talking about Web 3.0.

Making a Semantic Web

Tim Berners-Lee invented the World Wide Web in 1989. He created it as an interface for the Internet and a way for people to share information with one another. Berners-Lee disputes the existence of Web 2.0, calling it nothing more than meaningless jargon [source: Register]. Berners-Lee maintains that he intended the World Wide Web to do all the things that Web 2.0 is supposed to do.

Tim Berners-Lee, the inventor of the World Wide Web. (Catrina Genovese/Getty Images)

Berners-Lee's vision of the future Web is similar to the concept of Web 3.0. It's called the Semantic Web. Right now, the Web's structure is geared for humans. It's easy for us to visit a Web page and understand what it's all about. Computers can't do that. A search engine might be able to scan for keywords, but it can't understand how those keywords are used in the context of the page.

With the Semantic Web, computers will scan and interpret information on Web pages using software agents. These software agents will be programs that crawl through the Web, searching for relevant information. They'll be able to do that because the Semantic Web will have collections of information called ontologies. In terms of the Internet, an ontology is a file that defines the relationships among a group of terms. For example, the term "cousin" refers to the familial relationship between two people who share one set of grandparents. A Semantic Web ontology might define each familial role like this:

  • Grandparent: A direct ancestor two generations removed from the subject
  • Parent: A direct ancestor one generation removed from the subject
  • Brother or sister: Someone who shares the same parent as the subject
  • Nephew or niece: Child of the brother or sister of the subject
  • Aunt or uncle: Sister or brother to a parent of the subject
  • Cousin: Child of an aunt or uncle of the subject

For the Semantic Web to be effective, ontologies have to be detailed and comprehensive. In Berners-Lee's concept, they would exist in the form of metadata. Metadata is information included in the code for Web pages that is invisible to humans, but readable by computers.
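
One way to picture an ontology is as machine-readable facts plus rules for combining them. The Python sketch below is only an analogy -- real Semantic Web ontologies are published as metadata in standards such as RDF and OWL -- but it shows how a "cousin" relationship can be derived from explicit parent links. All of the names are invented.

# Toy "ontology": explicit facts (parent links) plus a rule for deriving "cousin".
parents = {
    "Alice": ["Carol", "Dave"],
    "Bob":   ["Erin", "Frank"],
    "Carol": ["Grace", "Henry"],
    "Erin":  ["Grace", "Henry"],
}

def grandparents(person):
    return {gp for p in parents.get(person, []) for gp in parents.get(p, [])}

def are_cousins(a, b):
    # Cousins share at least one grandparent but do not share a parent.
    share_grandparent = bool(grandparents(a) & grandparents(b))
    share_parent = bool(set(parents.get(a, [])) & set(parents.get(b, [])))
    return share_grandparent and not share_parent

print(are_cousins("Alice", "Bob"))   # True: Carol and Erin are sisters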

Constructing ontologies takes a lot of work. In fact, that's one of the big obstacles the Semantic Web faces. Will people be willing to put in the effort required to make comprehensive ontologies for their Web sites? Will they maintain them as the Web sites change? Critics suggest that the task of creating and maintaining such complex files is too much work for most people.

On the other hand, some people really enjoy labeling or tagging Web objects and information. Web tags categorize the tagged object or information. Several blogs include a tag option, making it easy to classify journal entries under specific topics. Photo sharing sites like Flickr allow users to tag pictures. Google has even turned it into a game: Google Image Labeler pits two people against each other in a labeling contest. Each player tries to create the largest number of relevant tags for a series of images. According to some experts, Web 3.0 will be able to search tags and labels and return the most relevant results back to the user. Perhaps Web 3.0 will combine Berners-Lee's concept of the Semantic Web with Web 2.0's tagging culture.

Even though Web 3.0 is more theory than reality, that hasn't stopped people from guessing what will come next.

Beyond Web 3.0

Whatever we call the next generation of the Web, what will come after it? Theories range from conservative predictions to guesses that sound more like science fiction films.

Paul Otellini, CEO and President of Intel, discusses the increasing importance of mobile devices on the Web at the 2008 International Consumer Electronics Show. (David Paul Morris/Getty Images)

Here are just a few:

  • According to technology expert and entrepreneur Nova Spivack, the development of the Web moves in 10-year cycles. In the Web's first decade, most of the development focused on the back end, or infrastructure, of the Web. Programmers created the protocols and code languages we use to make Web pages. In the second decade, focus shifted to the front end and the era of Web 2.0 began. Now people use Web pages as platforms for other applications. They also create mashups and experiment with ways to make Web experiences more interactive. We're at the end of the Web 2.0 cycle now. The next cycle will be Web 3.0, and the focus will shift back to the back end. Programmers will refine the Internet's infrastructure to support the advanced capabilities of Web 3.0 browsers. Once that phase ends, we'll enter the era of Web 4.0. Focus will return to the front end, and we'll see thousands of new programs that use Web 3.0 as a foundation [source: Nova Spivack].
  • The Web will evolve into a three-dimensional environment. Rather than a Web 3.0, we'll see a Web 3D. Combining virtual reality elements with the persistent online worlds of massively multiplayer online roleplaying games (MMORPGs), the Web could become a digital landscape that incorporates the illusion of depth. You'd navigate the Web either from a first-person perspective or through a digital representation of yourself called an avatar (to learn more about an avatar's perspective, read How the Avatar Machine Works).
  • The Web will build on developments in distributed computing and lead to true artificial intelligence. In distributed computing, several computers tackle a large processing job. Each computer handles a small part of the overall task. Some people believe the Web will be able to think by distributing the workload across thousands of computers and referencing deep ontologies. The Web will become a giant brain capable of analyzing data and extrapolating new ideas based on that information.
  • The Web will extend far beyond computers and cell phones. Everything from watches to television sets to clothing will connect to the Internet. Users will have a constant connection to the Web, and vice versa. Each user's software agent will learn more about its respective user by electronically observing his or her activities. This might lead to debates about the balance between individual privacy and the benefit of having a personalized Web browsing experience.
  • The Web will merge with other forms of entertainment until all distinctions between the forms of media are lost. Radio programs, television shows and feature films will rely on the Web as a delivery system.

It's too early to tell which (if any) of these future versions of the Web will come true. It may be that the real future of the Web is even more extravagant than the most extreme predictions. We can only hope that by the time the future of the Web gets here, we can all agree on what to call it.

Source : here

Friday, March 5, 2010

How Web Operating Systems Work

As the Web evolves, people invent new words to describe its features and applications. Sometimes, a term gains widespread acceptance even if some people believe it's misleading or inaccurate. Such is the case with Web operating systems.

The AstraNOS operating system login screen. (©2008 HowStuffWorks)

An operating system (OS) is a special kind of program that organizes and controls computer hardware and software. Operating systems interact directly with computer hardware and serve as a platform for other applications. Whether it's Windows, Linux, Unix or Mac OS X, your computer depends on its OS to function.

That's why some people object to the term Web OS. A Web OS is a user interface (UI) that allows people to access applications stored completely or in part on the Web. It might mimic the user interface of traditional computer operating systems like Windows, but it doesn't interact directly with the computer's hardware. The user must still have a traditional OS on his or her computer.

While there aren't many computer operating systems to choose from, the same can't be said of Web operating systems. There are dozens of Web operating systems available. Some of them offer a wide range of services, while others are still in development and only provide limited functionality. In some cases, there may be a single ambitious programmer behind the project. Other Web operating systems are the product of a large team effort. Some are free to download, and others charge a fee. Web operating systems can come in all shapes and sizes.

What do Web operating systems do?

Web operating systems are interfaces to distributed computing systems, particularly cloud or utility computing systems. In these systems, a company provides computer services to users through an Internet connection. The provider runs a system of computers that include application servers and databases.

With some systems, people access the applications using Web browsers like Firefox or Internet Explorer. With other systems, users must download a program that creates a system-specific client. A client is software that accesses information or services from other software. In either case, users access programs that are stored not on their own computers, but on the Web.

What sort of services do they provide? Web operating systems can give users access to practically any program they could run on a computer's desktop. Common applications include:

  • Calendars
  • E-mail
  • File management
  • Games
  • Instant messaging programs
  • Photo, video and audio editing programs
  • RSS readers
  • Spreadsheet programs
  • Word processing programs

With traditional computer operating systems, you'd have to install applications on your own computer. The applications would exist on your computer's hard disk drive. They would run by sending requests to your computer's OS, which gives them access to the processing power of your computer's central processing unit (CPU).

Web operating systems can't replace your computer's native OS -- in fact, they depend on traditional computer operating systems to work. The user side of Web OS software, whether it's a Web browser or a system-specific client, runs on top of your computer's OS. But programmers design Web operating systems to look and act like a desktop OS. A Web OS might look a lot like a traditional OS, but it doesn't manage your computer's hardware or software.

Portals like iGoogle aren't true operating systems, but they do pull information from other Web pages into a centralized site. (©2008 HowStuffWorks)

A Web OS allows you to access applications stored not on your computer, but on the Web. The applications exist wholly or in part on Web servers within a particular provider network. When you save information in an application, you might not store it on your computer. Instead, you save the information to databases connected to the Internet. Some Web operating systems also give you the option to save information to your local hard disk drive.

Because Web operating systems aren't tied to a specific computer or device, you can access Web applications and data from any device connected to the Internet. That is, you can do it as long as the device can run the Web operating software (whether that's a particular Web browser or client). This means that you can access the Web OS on one computer, create a document, save the work and then access it again later using a completely different machine. Web operating systems offer users the benefit of accessibility -- data isn't tied down to your computer.

The Technology of Web Operating Systems


With so many different Web operating systems either currently available or in development, it should come as no surprise that programmers use different approaches to achieve the same effect. While the goal of a Web OS is to provide an experience similar to using a desktop OS, there are no hard and fast rules for how to make that happen. The two most popular approaches rely on Flash technologies or Asynchronous JavaScript and XML (AJAX) technologies.

Flash is a set of technologies that enable programmers to create interactive Web pages. It's a technology that uses vector graphics. Vector graphics record image data as a collection of shapes and lines rather than individual pixels, which allows computers to load Flash images and animation faster than pixel-based graphics.

Flash files stream over the Internet, which means the end user accessing the file doesn't have to wait for the entire file to download to his or her computer before accessing parts of it. With Flash-based programs like YouTube's video player, this means you can start watching a film clip without having to download it first.

More than 98 percent of all computers connected to the Internet have a Flash player installed [source: Adobe]. That makes Flash an attractive approach for many programmers. They can create a Web OS knowing that the vast majority of computer users will be able to access it without having to download additional software.

AJAX technologies rely on hypertext markup language (HTML), the JavaScript programming language, Cascading Style Sheets (CSS) and eXtensible Markup Language (XML). It's a browser technology. The HTML language is a collection of markup tags programmers use on text files that tell Web browsers how to display the text file as a Web page. CSS is a tool that gives programmers more options when tweaking a Web site's appearance. Programmers can create a style sheet with certain attributes such as font style and color, and then apply those styles across several Web pages at once. JavaScript is a programming language that allows applications to send information back and forth between servers and browsers. XML is a markup language, which means programmers use it to describe the structure of information within a file and how it relates to other information.

The "asynchronous" aspect of AJAX means that AJAX applications transfer data between servers and browsers in small bits of information as needed. The alternative is to send an entire Web page to the browser every time something changes, which would significantly slow down the user's experience. With sufficient skill and knowledge, a programmer can create an AJAX application with the same functions as a desktop application.

As with Flash, most computers can run AJAX applications. That's because AJAX isn't a new programming language but rather a way to use established Web standards to create new applications. As long as an application programmer includes the right information in an application's code, it should run fine on any major Web browser. Some well-known Web applications based on AJAX include Google Calendar and Gmail.

Why Use a Web OS?


Web operating systems simplify a user's experience when accessing applications hosted on remote servers. Ideally, a Web OS behaves like a desktop OS. The more familiar and intuitive the system, the faster people will learn how to use it. When a person chooses to run a certain application, his or her computer sends a request to the system's control node -- a special server that acts as a system administrator. The control node interprets the request and connects the user's client to the appropriate application server or database. By offloading applications, storage and processing power to a remote network, users don't have to worry about upgrading computer systems every few years.
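
As a rough sketch of that routing role (not a description of how any particular Web OS actually works), you can think of the control node as a lookup table that maps each requested application to the server that hosts it. The server names below are invented.

# Toy "control node": map an application request to the server that hosts it.
app_servers = {
    "calendar":  "calendar01.internal.example",
    "email":     "mail02.internal.example",
    "documents": "docs01.internal.example",
}

def route_request(app_name):
    server = app_servers.get(app_name)
    if server is None:
        raise LookupError(f"no server registered for application '{app_name}'")
    return server

print(route_request("calendar"))   # calendar01.internal.example
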
YouOS is one of the more popular Web operating systems on the Internet. (©2008 HowStuffWorks)

For many people, that's the most attractive feature of Web operating systems. As long as their computers can run the browser or client software necessary to access the system, there's no need to upgrade. Some people become frustrated when they have to purchase new computers in order to run current software. With distributed computing, it's the provider's responsibility to provide application functionality. If the provider isn't able to meet user demands, users might look elsewhere for services.

Web operating systems can also make it easier to share data between computers. Perhaps you own both a Mac computer and a PC. It can be challenging to share data between the two different computers. Even if you use file formats that are compatible with both Mac computers and PCs, you could end up with a copy of the same file on each machine. Changes to one copy aren't reflected in the other computer's copy. Web operating systems provide an interface where you can use any computer to create, modify and access a single copy of a file saved on a remote database. As long as the Web OS you're using can cross platforms, meaning it works on both Macs and PCs, you'll be able to work on the file at any time using either of your computers.

Likewise, Web operating systems can simplify collaborative projects. Many Web operating systems allow users to share files. Each user can work from the file saved to the system's native network. For many users, this is an attractive alternative to organizing multiple copies of the same file and then incorporating everyone's changes into a new version.

Source : here

Byte Prefixes and Binary Math

When you start talking about lots of bytes, you get into prefixes like kilo, mega and giga, as in kilobyte, megabyte and gigabyte (also shortened to K, M and G, as in Kbytes, Mbytes and Gbytes or KB, MB and GB). The following table shows the binary multipliers:

  Name    Abbr.   Size
  Kilo    K       2^10 = 1,024
  Mega    M       2^20 = 1,048,576
  Giga    G       2^30 = 1,073,741,824
  Tera    T       2^40 = 1,099,511,627,776
  Peta    P       2^50 = 1,125,899,906,842,624
  Exa     E       2^60 = 1,152,921,504,606,846,976
  Zetta   Z       2^70 = 1,180,591,620,717,411,303,424
  Yotta   Y       2^80 = 1,208,925,819,614,629,174,706,176


You can see in this chart that kilo is about a thousand, mega is about a million, giga is about a billion, and so on. So when someone says, "This computer has a 2 gig hard drive," what he or she means is that the hard drive stores 2 gigabytes, or approximately 2 billion bytes, or exactly 2,147,483,648 bytes. How could you possibly need 2 gigabytes of space? When you consider that one CD holds 650 megabytes, you can see that just three CDs worth of data will fill the whole thing! Terabyte databases are fairly common these days, and there are probably a few petabyte databases floating around the Pentagon by now.
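
The arithmetic in that example is easy to check with a few lines of Python; the 2-gigabyte drive and the 650-megabyte CD figures come straight from the paragraph above.

# Binary prefixes: each step up multiplies by 2^10 = 1,024.
KILO = 2 ** 10
MEGA = 2 ** 20
GIGA = 2 ** 30

print(2 * GIGA)                        # 2147483648 -- exactly the "2 gig" drive above
print(650 * MEGA)                      # one CD's worth of data, in bytes
print((3 * 650 * MEGA) / (2 * GIGA))   # about 0.95 -- three CDs nearly fill the drive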

Binary math works just like decimal math, except that the value of each bit can be only 0 or 1. To get a feel for binary math, let's start with decimal addition and see how it works. Assume that we want to add 452 and 751:

  452
+ 751
-----
 1203


To add these two numbers together, you start at the right: 2 + 1 = 3. No problem. Next, 5 + 5 = 10, so you save the zero and carry the 1 over to the next place. Next, 4 + 7 + 1 (because of the carry) = 12, so you save the 2 and carry the 1. Finally, 0 + 0 + 1 = 1. So the answer is 1203.

Binary addition works exactly the same way:

  010
+ 111
-----
 1001

Starting at the right, 0 + 1 = 1 for the first digit. No carrying there. You've got 1 + 1 = 10 for the second digit, so save the 0 and carry the 1. For the third digit, 0 + 1 + 1 = 10, so save the zero and carry the 1. For the last digit, 0 + 0 + 1 = 1. So the answer is 1001. If you translate everything over to decimal you can see it is correct: 2 + 7 = 9.
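
If you'd rather not do the carries by hand, Python can check binary sums like this one directly: the 0b prefix marks a binary literal, and bin() converts a number back to binary.

# Verify the worked example: 010 + 111 in binary is 2 + 7 = 9, which is 1001.
a = 0b010           # 2 in decimal
b = 0b111           # 7 in decimal
total = a + b
print(total)        # 9
print(bin(total))   # 0b1001

# The same check works for the decimal example: 452 + 751 = 1203.
print(452 + 751)    # 1203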

To sum up, here's what we've learned about bits and bytes:

  • Bits are binary digits. A bit can hold the value 0 or 1.
  • Bytes are made up of 8 bits each.
  • Binary math works just like decimal math, but each bit can have a value of only 0 or 1.
Source : here

The Standard ASCII Character Set

Bytes are frequently used to hold individual characters in a text document. In the ASCII character set, each binary value between 0 and 127 is given a specific character. Most computers extend the ASCII character set to use the full range of 256 characters available in a byte. The upper 128 characters handle special things like accented characters from common foreign languages.

You can see examples of the standard ASCII codes below. Computers store text documents, both on disk and in memory, using these codes. For example, if you use Notepad in Windows 95/98 to create a text file containing the words, "Four score and seven years ago," Notepad would use 1 byte of memory per character (including 1 byte for each space character between the words -- ASCII character 32). When Notepad stores the sentence in a file on disk, the file will also contain 1 byte per character and per space.

Try this experiment: Open up a new file in Notepad and insert the sentence, "Four score and seven years ago" in it. Save the file to disk under the name getty.txt. Then use Windows Explorer to look at the size of the file. You will find that the file has a size of 30 bytes on disk: 1 byte for each character. If you add another word to the end of the sentence and re-save it, the file size will jump to the appropriate number of bytes. Each character consumes a byte.

If you were to look at the file as a computer looks at it, you would find that each byte contains not a letter but a number -- the number is the ASCII code corresponding to the character (see below). So on disk, the bytes for the words "Four," "and" and "seven" look like this:

   F   o   u   r       a   n   d       s   e   v   e   n
  70 111 117 114  32  97 110 100  32 115 101 118 101 110

By looking in the ASCII table, you can see a one-to-one correspondence between each character and the ASCII code used. Note the use of 32 for a space -- 32 is the ASCII code for a space. We could expand these decimal numbers out to binary numbers (so 32 = 00100000) if we wanted to be technically correct -- that is how the computer really deals with things.
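
Python's ord() and len() functions make the same point as the experiment above: every character, including each space, is one byte with an ASCII code, and 32 is the code for a space.

# Each character of plain ASCII text occupies one byte.
sentence = "Four score and seven years ago"

print(len(sentence))                       # 30 characters -> 30 bytes on disk
print([ord(c) for c in "Four and seven"])  # the ASCII codes shown above
print(ord(" "))                            # 32 -- the code for a space
print(format(32, "08b"))                   # 00100000 -- the same value in binary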

The first 32 values (0 through 31) are codes for things like carriage return and line feed. The space character is the 33rd value, followed by punctuation, digits, uppercase characters and lowercase characters. To see all 128 values, check out Unicode.org's chart.

We'll learn about byte prefixes and binary math next.

Source : here

The Base-2 System and the 8-bit Byte

Computers use the base-2 system because it makes them a lot easier to implement with current electronic technology. You could wire up and build computers that operate in base-10, but they would be fiendishly expensive right now. On the other hand, base-2 computers are relatively cheap.

So computers use binary numbers, and therefore use binary digits in place of decimal digits. The word bit is a shortening of the words "Binary digIT." Whereas decimal digits have 10 possible values ranging from 0 to 9, bits have only two possible values: 0 and 1. Therefore, a binary number is composed of only 0s and 1s, like this: 1011. How do you figure out what the value of the binary number 1011 is? You do it in the same way we did it above for 6357, but you use a base of 2 instead of a base of 10. So:

(1 * 2^3) + (0 * 2^2) + (1 * 2^1) + (1 * 2^0) = 8 + 0 + 2 + 1 = 11
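
You can spell out the same expansion in Python, which is a handy way to double-check a binary value; int() with a base of 2 does the whole conversion in one step.

# 1011 in binary, expanded place by place -- compare with the line above.
value = (1 * 2**3) + (0 * 2**2) + (1 * 2**1) + (1 * 2**0)
print(value)             # 11

# Python can also convert directly between the two representations.
print(int("1011", 2))    # 11
print(bin(11))           # 0b1011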

You can see that in binary numbers, each bit holds the value of increasing powers of 2. That makes counting in binary pretty easy. Starting at zero and going through 20, counting in decimal and binary looks like this:

0 = 0
1 = 1
2 = 10
3 = 11
4 = 100
5 = 101
6 = 110
7 = 111
8 = 1000
9 = 1001
10 = 1010
11 = 1011
12 = 1100
13 = 1101
14 = 1110
15 = 1111
16 = 10000
17 = 10001
18 = 10010
19 = 10011
20 = 10100

When you look at this sequence, 0 and 1 are the same for decimal and binary number systems. At the number 2, you see carrying first take place in the binary system. If a bit is 1, and you add 1 to it, the bit becomes 0 and the next bit becomes 1. In the transition from 15 to 16 this effect rolls over through 4 bits, turning 1111 into 10000.

Bits are rarely seen alone in computers. They are almost always bundled together into 8-bit collections, and these collections are called bytes. Why are there 8 bits in a byte? A similar question is, "Why are there 12 eggs in a dozen?" The 8-bit byte is something that people settled on through trial and error over the past 50 years.

With 8 bits in a byte, you can represent 256 values ranging from 0 to 255, as shown here:

  0 = 00000000
  1 = 00000001
  2 = 00000010
...
254 = 11111110
255 = 11111111

In the article How CDs Work, you learn that a CD uses 2 bytes, or 16 bits, per sample. That gives each sample a range from 0 to 65,535, like this:
    0 = 0000000000000000
    1 = 0000000000000001
    2 = 0000000000000010
  ...
65534 = 1111111111111110
65535 = 1111111111111111

Next, we'll look at one way that bytes are used.

Source : here

Introduction to How Bits and Bytes Work

If you have used a computer for more than five minutes, then you have heard the words bits and bytes. Both RAM and hard disk capacities are measured in bytes, as are file sizes when you examine them in a file viewer.

You might hear an advertisement that says, "This computer has a 32-bit Pentium processor with 64 megabytes of RAM and 2.1 gigabytes of hard disk space." And many HowStuffWorks articles talk about bytes (for example, How CDs Work). In this article, we will discuss bits and bytes so that you have a complete understanding.

Decimal Numbers

The easiest way to understand bits is to compare them to something you know: digits. A digit is a single place that can hold numerical values between 0 and 9. Digits are normally combined in groups to create larger numbers. For example, 6,357 has four digits. It is understood that in the number 6,357, the 7 is filling the "1s place," while the 5 is filling the 10s place, the 3 is filling the 100s place and the 6 is filling the 1,000s place. So you could express things this way if you wanted to be explicit:

(6 * 1000) + (3 * 100) + (5 * 10) + (7 * 1) = 6000 + 300 + 50 + 7 = 6357

Another way to express it would be to use powers of 10. Assuming that we are going to represent the concept of "raised to the power of" with the "^" symbol (so "10 squared" is written as "10^2"), another way to express it is like this:

(6 * 10^3) + (3 * 10^2) + (5 * 10^1) + (7 * 10^0) = 6000 + 300 + 50 + 7 = 6357

What you can see from this expression is that each digit is a placeholder for the next higher power of 10, starting in the first digit with 10 raised to the power of zero.
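
Here's a quick Python check of that place-value idea: rebuild 6,357 from its digits, multiplying each digit by the next higher power of 10, starting with 10^0 at the right.

# Rebuild 6,357 from its digits and powers of 10, rightmost digit first.
digits = [6, 3, 5, 7]
value = sum(d * 10 ** power for power, d in enumerate(reversed(digits)))
print(value)   # 6357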

That should all feel pretty comfortable -- we work with decimal digits every day. The neat thing about number systems is that there is nothing that forces you to have 10 different values in a digit. Our base-10 number system likely grew up because we have 10 fingers, but if we happened to evolve to have eight fingers instead, we would probably have a base-8 number system. You can have base-anything number systems. In fact, there are lots of good reasons to use different bases in different situations.

Computers happen to operate using the base-2 number system, also known as the binary number system (just like the base-10 number system is known as the decimal number system). Find out why and how that works in the next section.

Source : here