Wednesday, 25 April 2012

SCTP Protocol

This is an implementation of the SCTP protocol as defined in RFC 2960 and RFC 3309. It is a message-oriented, reliable transport protocol with direct support for multihoming that runs on top of ip(7), and supports both IPv4 and IPv6.
Like TCP, SCTP provides reliable, connection-oriented data delivery with congestion control. Unlike TCP, SCTP also provides message boundary preservation, ordered and unordered message delivery, multi-streaming and multi-homing. Detection of data corruption, loss of data and duplication of data is achieved by using checksums and sequence numbers. A selective retransmission mechanism is applied to correct loss or corruption of data.
This implementation supports a mapping of SCTP into the sockets API as defined in draft-ietf-tsvwg-sctpsocket-10.txt (Sockets API Extensions for SCTP). Two styles of interfaces are supported.
A one-to-many style interface with a 1-to-many relationship between socket and associations, where the outbound association setup is implicit. The syntax of a one-to-many style socket() call is
sd = socket(PF_INET, SOCK_SEQPACKET, IPPROTO_SCTP);
A typical server in this style uses the following socket calls in sequence to prepare an endpoint for servicing requests.
1. socket()
2. bind()
3. listen()
4. recvmsg()
5. sendmsg()
6. close()
A typical client uses the following calls in sequence to set up an association with a server to request services (a sketch combining both sequences in C follows the list).
1. socket()
2. sendmsg()
3. recvmsg()
4. close()
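As a rough illustration of the two sequences above, here is a minimal one-to-many echo server in C. It is a sketch under stated assumptions rather than a hardened program: the port number and buffer size are arbitrary placeholders, and error handling is abbreviated.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/uio.h>
#include <sys/socket.h>
#include <netinet/in.h>

int main(void)
{
    /* 1. socket() -- one-to-many style */
    int sd = socket(PF_INET, SOCK_SEQPACKET, IPPROTO_SCTP);
    if (sd < 0) { perror("socket"); exit(1); }

    /* 2. bind() to a hypothetical port on all local addresses */
    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(9999);
    if (bind(sd, (struct sockaddr *)&addr, sizeof(addr)) < 0) { perror("bind"); exit(1); }

    /* 3. listen() -- associations are set up implicitly by peers */
    if (listen(sd, 5) < 0) { perror("listen"); exit(1); }

    /* 4./5. recvmsg() one message, then sendmsg() it back to the same peer */
    char buf[1024];
    struct sockaddr_in peer;
    struct iovec iov = { buf, sizeof(buf) };
    struct msghdr msg;
    memset(&msg, 0, sizeof(msg));
    msg.msg_name = &peer;
    msg.msg_namelen = sizeof(peer);
    msg.msg_iov = &iov;
    msg.msg_iovlen = 1;

    ssize_t n = recvmsg(sd, &msg, 0);
    if (n > 0) {
        iov.iov_len = (size_t)n;   /* echo exactly what arrived */
        sendmsg(sd, &msg, 0);
    }

    /* 6. close() */
    close(sd);
    return 0;
}

Note that the one socket descriptor serves every peer; a peer's first message implicitly sets up its association.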
A one-to-one style interface with a 1-to-1 relationship between socket and association, which enables existing TCP applications to be ported to SCTP with very little effort. The syntax of a one-to-one style socket() call is
sd = socket(PF_INET, SOCK_STREAM, IPPROTO_SCTP);
A typical server in one-to-one style uses the following system call sequence to prepare an SCTP endpoint for servicing requests:
1. socket()
2. bind()
3. listen()
4. accept()
The accept() call blocks until a new association is set up. It returns with a new socket descriptor. The server then uses the new socket descriptor to communicate with the client, using recv() and send() calls to get requests and send back responses. Then it calls
5. close()
to terminate the association. A typical client uses the following system call sequence to set up an association with a server to request services:
1. socket()
2. connect()
After returning from connect(), the client uses send() and recv() calls to send out requests and receive responses from the server. The client calls
3. close()
to terminate this association when done.
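For comparison, a minimal one-to-one style client might look like the following sketch. The server address and port are hypothetical placeholders, and error handling is abbreviated.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>
#include <netinet/in.h>

int main(void)
{
    /* 1. socket() -- one-to-one style */
    int sd = socket(PF_INET, SOCK_STREAM, IPPROTO_SCTP);
    if (sd < 0) { perror("socket"); exit(1); }

    /* 2. connect() to a hypothetical server address and port */
    struct sockaddr_in srv;
    memset(&srv, 0, sizeof(srv));
    srv.sin_family = AF_INET;
    srv.sin_port = htons(9999);
    inet_pton(AF_INET, "192.0.2.1", &srv.sin_addr);
    if (connect(sd, (struct sockaddr *)&srv, sizeof(srv)) < 0) { perror("connect"); exit(1); }

    /* send() a request and recv() the response, as with TCP */
    const char *req = "hello";
    char reply[1024];
    send(sd, req, strlen(req), 0);
    ssize_t n = recv(sd, reply, sizeof(reply), 0);
    if (n > 0)
        printf("received %zd bytes\n", n);

    /* 3. close() terminates the association */
    close(sd);
    return 0;
}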
Address Formats

SCTP is built on top of IP (see ip(7)). The address formats defined by ip(7) apply to SCTP. SCTP only supports point-to-point communication; broadcasting and multicasting are not supported.

Sysctl
These variables can be accessed via the /proc/sys/net/sctp/* files or with the sysctl(2) interface. In addition, most IP sysctls also apply to SCTP. See ip(7).
addip_enable
Enable SCTP ADDIP (Dynamic Address Reconfiguration) support. This is off by default.
association_max_retrans
Maximum number of consecutive retransmissions to a peer before an endpoint considers that the peer is unreachable and closes the association. The default value is 10.
cookie_preserve_enable
Handle COOKIE PRESERVATIVE parameter in the INIT chunk. This is on by default.
hb_interval
The interval at which a HEARTBEAT chunk is sent to a destination transport address to monitor the reachability of an idle destination transport address. The default is 30 seconds and is maintained in msecs.
max_burst
Maximum number of new data packets that can be sent in a burst. The default value is 4.
max_init_retransmits
Maximum number of times an INIT chunk or a COOKIE ECHO chunk is retransmitted before an endpoint aborts the initialization process and closes the association. The default value is 8.
path_max_retrans
Maximum number of consecutive retransmissions over a destination transport address of a peer endpoint before it is marked as inactive. The default value is 5.
prsctp_enable
Enable PR-SCTP. This is on by default.
rcvbuf_policy
This controls the socket receive buffer accounting policy. The default value is 0 and indicates that all the associations belonging to a socket share the same receive buffer space. When set to 1, each association will have its own receive buffer space.
rto_alpha_exp_divisor
This is the RTO.Alpha value when expressed in right shifts and is used in RTO calculations. The default value is 3.
rto_beta_exp_divisor
This is the RTO.Beta value when expressed in right shifts and is used in RTO calculations. The default value is 2.
rto_initial
This is the initial value of the RTO (retransmission timeout) that is used in RTO calculations. The default value is 3 seconds and is maintained in msecs.
rto_max
This is the maximum value of the RTO (retransmission timeout) that is used in RTO calculations. The default value is 60 seconds and is maintained in msecs.
rto_min
This is the minimum value of the RTO (retransmission timeout) that is used in RTO calculations. The default value is 1 second and is maintained in msecs.
sack_timeout
Delayed SACK timeout. The default value is 200 msecs.
sndbuf_policy
This controls the socket sendbuffer accounting policy. The default value is 0 and indicates that all the associations belonging to a socket share the same send buffer space. When set to 1, each association will have its own send buffer space.
valid_cookie_life
This is the maximum lifespan of the Cookie sent in an INIT ACK chunk. The default value is 60 secs and is maintained in msecs.
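Since these variables are plain /proc files, they are easy to inspect programmatically. The following small C sketch reads a few of the timer-related values; it assumes each file holds a single integer, which matches the descriptions above.

#include <stdio.h>

/* Print the integer stored in one sysctl file, if readable. */
static void show(const char *path)
{
    FILE *f = fopen(path, "r");
    int value;
    if (f != NULL) {
        if (fscanf(f, "%d", &value) == 1)
            printf("%s = %d\n", path, value);
        fclose(f);
    }
}

int main(void)
{
    show("/proc/sys/net/sctp/rto_initial");
    show("/proc/sys/net/sctp/rto_min");
    show("/proc/sys/net/sctp/rto_max");
    show("/proc/sys/net/sctp/hb_interval");
    return 0;
}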
Statistics
These variables can be accessed via the /proc/net/sctp/* files.
assocs
Displays the following information about the active associations: assoc ptr, sock ptr, socket style, sock state, association state, hash bucket, association id, bytes in transmit queue, bytes in receive queue, user id, inode, local port, remote port, local addresses and remote addresses.
eps
Displays the following information about the active endpoints: endpoint ptr, sock ptr, socket style, sock state, hash bucket, local port, user id, inode and local addresses.
snmp
Displays the following statistics related to SCTP states, packets and chunks.
SctpCurrEstab
The number of associations for which the current state is either ESTABLISHED, SHUTDOWN-RECEIVED or SHUTDOWN-PENDING.
SctpActiveEstabs
The number of times that associations have made a direct transition to the ESTABLISHED state from the COOKIE-ECHOED state. The upper layer initiated the association attempt.
SctpPassiveEstabs
The number of times that associations have made a direct transition to the ESTABLISHED state from the CLOSED state. The remote endpoint initiated the association attempt.
SctpAborteds
The number of times that associations have made a direct transition to the CLOSED state from any state using the primitive 'ABORT'. Ungraceful termination of the association.
SctpShutdowns
The number of times that associations have made a direct transition to the CLOSED state from either the SHUTDOWN-SENT state or the SHUTDOWN-ACK-SENT state. Graceful termination of the association.
SctpOutOfBlues
The number of out of the blue packets received by the host. An out of the blue packet is an SCTP packet correctly formed, including the proper checksum, but for which the receiver was unable to identify an appropriate association.
SctpChecksumErrors
The number of SCTP packets received with an invalid checksum.
SctpOutCtrlChunks
The number of SCTP control chunks sent (retransmissions are not included). Control chunks are those chunks different from DATA.
SctpOutOrderChunks
The number of SCTP ordered data chunks sent (retransmissions are not included).
SctpOutUnorderChunks
The number of SCTP unordered chunks (data chunks in which the U bit is set to 1) sent (retransmissions are not included).
SctpInCtrlChunks
The number of SCTP control chunks received (no duplicate chunks included).
SctpInOrderChunks
The number of SCTP ordered data chunks received (no duplicate chunks included).
SctpInUnorderChunks
The number of SCTP unordered chunks (data chunks in which the U bit is set to 1) received (no duplicate chunks included).
SctpFragUsrMsgs
The number of user messages that have to be fragmented because of the MTU.
SctpReasmUsrMsgs
The number of user messages reassembled, after conversion into DATA chunks.
SctpOutSCTPPacks
The number of SCTP packets sent. Retransmitted DATA chunks are included.
SctpInSCTPPacks
The number of SCTP packets received. Duplicates are included.
Socket Options
To set or get an SCTP socket option, call getsockopt(2) to read or setsockopt(2) to write the option, with the option level argument set to SOL_SCTP.
SCTP_RTOINFO
This option is used to get or set the protocol parameters used to initialize and bound the retransmission timeout (RTO). The structure sctp_rtoinfo defined in /usr/include/netinet/sctp.h is used to access and modify these parameters.
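A short sketch of the get-modify-set pattern for this option, assuming the lksctp-tools header <netinet/sctp.h> is installed; the new values are arbitrary examples in milliseconds.

#include <string.h>
#include <sys/socket.h>
#include <netinet/sctp.h>

/* Read the current RTO parameters, then lower the initial RTO.
 * 'sd' is an already-created SCTP socket. Returns 0 on success. */
int tune_rto(int sd)
{
    struct sctp_rtoinfo rto;
    socklen_t len = sizeof(rto);

    memset(&rto, 0, sizeof(rto));
    rto.srto_assoc_id = 0;   /* endpoint-wide defaults */
    if (getsockopt(sd, SOL_SCTP, SCTP_RTOINFO, &rto, &len) < 0)
        return -1;

    rto.srto_initial = 1000; /* 1 second -- example value, in msecs */
    rto.srto_min = 500;      /* example floor of 500 msecs          */
    return setsockopt(sd, SOL_SCTP, SCTP_RTOINFO, &rto, sizeof(rto));
}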
SCTP_ASSOCINFO
This option is used to both examine and set various association and endpoint parameters. The structure sctp_assocparams defined in /usr/include/netinet/sctp.h is used to access and modify these parameters.
SCTP_INITMSG
This option is used to get or set the protocol parameters for the default association initialization. The structure sctp_initmsg defined in /usr/include/netinet/sctp.h is used to access and modify these parameters.
Setting initialization parameters is effective only on an unconnected socket (for one-to-many style sockets, only future associations are affected by the change). With one-to-one style sockets, this option is inherited by sockets derived from a listener socket.
SCTP_NODELAY
Turn on/off any Nagle-like algorithm. This means that packets are generally sent as soon as possible and no unnecessary delays are introduced, at the cost of more packets in the network. Expects an integer boolean flag.
SCTP_AUTOCLOSE
This socket option is applicable to the one-to-many style socket only. When set, it will cause associations that are idle for more than the specified number of seconds to automatically close. An idle association is defined as an association that has NOT sent or received user data. The special value of 0 indicates that no automatic close of any association should be performed. The option expects an integer defining the number of seconds of idle time before an association is closed.
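A minimal sketch of setting this option; the 60-second value is an arbitrary example.

#include <sys/socket.h>
#include <netinet/sctp.h>

/* Close associations idle for more than 60 seconds (0 would disable). */
int enable_autoclose(int sd)
{
    int seconds = 60;
    return setsockopt(sd, SOL_SCTP, SCTP_AUTOCLOSE, &seconds, sizeof(seconds));
}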
SCTP_SET_PEER_PRIMARY_ADDR
Requests that the peer mark the enclosed address as the association primary. The enclosed address must be one of the association's locally bound addresses. The structure sctp_setpeerprim defined in /usr/include/netinet/sctp.h is used to make a set peer primary request.
SCTP_PRIMARY_ADDR
Requests that the local SCTP stack use the enclosed peer address as the association primary. The enclosed address must be one of the association peer's addresses. The structure sctp_prim defined in /usr/include/netinet/sctp.h is used to make a get/set primary request.
SCTP_DISABLE_FRAGMENTS
This option is an on/off flag and is passed an integer, where non-zero is on and zero is off. If enabled, no SCTP message fragmentation will be performed. Instead, if a message being sent exceeds the current PMTU size, the message will NOT be sent and an error will be indicated to the user.
SCTP_PEER_ADDR_PARAMS
Using this option, applications can enable or disable heartbeats for any peer address of an association, modify an address's heartbeat interval, force a heartbeat to be sent immediately, and adjust the address's maximum number of retransmissions sent before an address is considered unreachable. The structure sctp_paddrparams defined in /usr/include/netinet/sctp.h is used to access and modify an address's parameters.
SCTP_DEFAULT_SEND_PARAM
Applications that wish to use the sendto() system call may wish to specify a default set of parameters that would normally be supplied through the inclusion of ancillary data. This socket option allows such an application to set the default sctp_sndrcvinfo structure. The application that wishes to use this socket option simply passes in to this call the sctp_sndrcvinfo structure defined in /usr/include/netinet/sctp.h. The input parameters accepted by this call include sinfo_stream, sinfo_flags, sinfo_ppid, sinfo_context and sinfo_timetolive. The user must set the sinfo_assoc_id field to identify the association to affect if the caller is using the one-to-many style.
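A sketch of installing default send parameters, assuming <netinet/sctp.h> from lksctp-tools; the stream number and payload protocol id are hypothetical examples.

#include <string.h>
#include <arpa/inet.h>
#include <sys/socket.h>
#include <netinet/sctp.h>

/* Make plain send()/sendto() calls go out on stream 1 with a given
 * payload protocol id; 'assoc' selects the association on a
 * one-to-many socket (use 0 on a one-to-one socket). */
int set_default_send(int sd, sctp_assoc_t assoc)
{
    struct sctp_sndrcvinfo info;
    memset(&info, 0, sizeof(info));
    info.sinfo_stream = 1;        /* example default stream           */
    info.sinfo_ppid = htonl(42);  /* hypothetical payload protocol id */
    info.sinfo_assoc_id = assoc;
    return setsockopt(sd, SOL_SCTP, SCTP_DEFAULT_SEND_PARAM,
                      &info, sizeof(info));
}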
SCTP_EVENTS
This socket option is used to specify various notifications and ancillary data the user wishes to receive. The structure sctp_event_subscribe defined in /usr/include/netinet/sctp.h is used to access or modify the events of interest to the user.
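A sketch of subscribing to two common notification types; the structure is a set of one-byte flags, and members left at zero keep those events disabled.

#include <string.h>
#include <sys/socket.h>
#include <netinet/sctp.h>

/* Ask for sctp_sndrcvinfo ancillary data and association change
 * notifications. */
int subscribe_events(int sd)
{
    struct sctp_event_subscribe ev;
    memset(&ev, 0, sizeof(ev));
    ev.sctp_data_io_event = 1;
    ev.sctp_association_event = 1;
    return setsockopt(sd, SOL_SCTP, SCTP_EVENTS, &ev, sizeof(ev));
}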
SCTP_I_WANT_MAPPED_V4_ADDR
This socket option is a boolean flag which turns on or off mapped V4 addresses. If this option is turned on and the socket is type PF_INET6, then IPv4 addresses will be mapped to V6 representation. If this option is turned off, then no mapping will be done of V4 addresses and a user will receive both PF_INET6 and PF_INET type addresses on the socket.
By default this option is turned on and expects an integer to be passed where non-zero turns on the option and zero turns off the option.
SCTP_MAXSEG
This socket option specifies the maximum size to put in any outgoing SCTP DATA chunk. If a message is larger than this size it will be fragmented by SCTP into the specified size. Note that the underlying SCTP implementation may fragment into smaller sized chunks when the PMTU of the underlying association is smaller than the value set by the user. The option expects an integer.
The default value for this option is 0, which indicates the user is NOT limiting fragmentation and only the PMTU will affect SCTP's choice of DATA chunk size.
SCTP_STATUS
Applications can retrieve current status information about an association, including association state, peer receiver window size, number of unacked data chunks, and number of data chunks pending receipt. This information is read-only. The structure sctp_status defined in /usr/include/netinet/sctp.h is used to access this information.
SCTP_GET_PEER_ADDR_INFO
Applications can retrieve information about a specific peer address of an association, including its reachability state, congestion window, and retransmission timer values. This information is read-only. The structure sctp_paddrinfo defined in /usr/include/netinet/sctp.h is used to access this information.
Authors:
Sridhar Samudrala

source: here

Saturday, 13 March 2010

How to Set Up SCTP in Linux

Before we start to set up SCTP in Linux, Fedora 12 with a kernel at or above 2.6.31 should be prepared. Because Fedora 12 ships SCTP as a kernel module, a kernel recompile is not needed in our case. Instead, the SCTP kernel module simply needs to be loaded into memory on Fedora 12 with the command ‘modprobe’. The command ‘modprobe sctp’ loads the SCTP module; see Figure 1


Figure 1 Load SCTP module into the kernel

The SCTP module should be loaded on both the server and the client. After that, it can be assumed that both the server and client have already configured the Linux platform so that they are capable of supporting the SCTP protocol. The next step is to activate the DAR extension of SCTP, to ensure that mSCTP is supported by Linux. The parameter ‘addip_enable’ indicates whether the DAR extension is active or not. When ‘addip_enable’ is 0, the Add-IP extension is inactive; when it is 1, the extension is active.

The command ‘echo 1 > /proc/sys/net/sctp/addip_enable’ is used to make Linux support mSCTP. The command ‘more /proc/sys/net/sctp/addip_enable’ confirms the setting.

See Figure 2:

Figure 2 Activating the Add-IP extension of SCTP

One problem with the SCTP protocol in Linux is that it does not support the SCTP APIs itself, while the SCTP APIs are required for coding the mSCTP handover. At this point, we downloaded an additional tool from http://sourceforge.net/projects/lksctp/files/ called LKSCTP, which provides the SCTP API functions. There are many versions of the LKSCTP tool; the latest one is 1.0.11. The one used in our testbed is version 1.0.10. The following steps have been taken to build LKSCTP in Linux:
  • Become root user to install LKSCTP by command: su -
  • Enter the LKSCTP directory containing the downloaded RPM files of LKSCTP by command: cd /root/fde13 (my own directory).
  • Install the RPM files by command: rpm -ivh lksctp-tools-1.0.10-1.*.rpm
  • Untar the LKSCTP tools directory from the gzipped tarball by command: tar -xzvf lksctp-tools-1.0.10.tar.gz
  • Enter the LKSCTP tool directory by command: cd lksctp-tools-1.0.10
  • Configure LKSCTP by command: ./configure
  • Build LKSCTP by command: make
After the “make” operation succeeds, the LKSCTP library and tools have been built and are ready to use. The following figure shows how to check whether LKSCTP is supported by Linux or not.

Figure 3 LKSCTP tools for Linux

In Figure 3, the command ‘checksctp’ indicates whether the server and the client support LKSCTP or not. The result shows that both of them support LKSCTP.
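If the checksctp binary is not at hand, a roughly equivalent probe can be written in a few lines of C: creating an SCTP socket only succeeds when the kernel supports SCTP, that is, when the sctp module is loaded or can be auto-loaded.

#include <stdio.h>
#include <unistd.h>
#include <sys/socket.h>
#include <netinet/in.h>

int main(void)
{
    int sd = socket(PF_INET, SOCK_STREAM, IPPROTO_SCTP);
    if (sd < 0) {
        perror("SCTP not supported");
        return 1;
    }
    puts("SCTP supported");
    close(sd);
    return 0;
}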

Monday, 08 March 2010

Soft Handover Procedure in mSCTP

In this section, we describe how to use mSCTP for soft handover in the transport layer. As an example, we consider a mobile node (MN) that initiates an SCTP association with a correspondent node (CN) in IPv6 networks. The case in IPv4 has similar procedures to those in IPv6 networks. After initiation of an SCTP association, the MN moves from access router A to access router B, as shown in Fig. 1.


Figure 1

It is assumed that an MN initiates an association with a CN. The resulting SCTP association consists of IP address 2 for the MN and IP address 1 for the CN. The procedural steps described below, Steps 1 through 4, will then be repeated whenever the MN moves to a new location, until the SCTP association is released.

Step 1) Obtaining an IP address for a new location: Let us assume that the MN moves from AR A to AR B and is now in the overlapping region. In this phase, we also need to assume that the MN can obtain a new IP address 3 from AR B by using IPv6 stateless address configuration.

Step 2) Adding the new IP address to the SCTP association: After obtaining a new IP address, the MN’s SCTP informs the CN’s SCTP that it will use a new IP address. This is done by sending an SCTP ASCONF chunk to the CN. The MN receives the responding ASCONF-ACK Chunk from the CN.

Step 3) Changing the primary IP address: While the MN continues to move toward AR B, it needs to make the new IP address its primary IP address according to an appropriate rule. Actually, the configuration of a specific rule to trigger this “primary address change” is a challenging issue for mSCTP.

Step 4) Deleting the old IP address from the SCTP association: As the MN continues to move toward AR B, if the old IP address becomes inactive, the MN must delete it from the address list. The rule for determining whether the IP address is inactive may also be implemented by using information from the underlying network or physical layer.
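To make the four steps concrete, here is a hedged sketch of how Steps 2 through 4 might map onto the Linux SCTP API on the mobile node. It assumes lksctp-tools (<netinet/sctp.h>, linked with -lsctp), an established one-to-one socket sd, and hypothetical placeholder addresses standing in for IP addresses 2 and 3; with addip_enable set, sctp_bindx() on a connected socket triggers the ASCONF exchanges described above.

#include <string.h>
#include <arpa/inet.h>
#include <sys/socket.h>
#include <netinet/sctp.h>

static void fill(struct sockaddr_in *sa, const char *ip)
{
    memset(sa, 0, sizeof(*sa));
    sa->sin_family = AF_INET;
    inet_pton(AF_INET, ip, &sa->sin_addr);
}

int handover(int sd)
{
    struct sockaddr_in new_addr, old_addr;
    fill(&new_addr, "192.0.2.3");   /* hypothetical "IP address 3" from AR B */
    fill(&old_addr, "192.0.2.2");   /* hypothetical "IP address 2" from AR A */

    /* Step 2: add the new address to the association (ASCONF ADD-IP). */
    if (sctp_bindx(sd, (struct sockaddr *)&new_addr, 1, SCTP_BINDX_ADD_ADDR) < 0)
        return -1;

    /* Step 3: ask the peer to treat the new local address as primary. */
    struct sctp_setpeerprim prim;
    memset(&prim, 0, sizeof(prim));
    memcpy(&prim.sspp_addr, &new_addr, sizeof(new_addr));
    if (setsockopt(sd, SOL_SCTP, SCTP_SET_PEER_PRIMARY_ADDR,
                   &prim, sizeof(prim)) < 0)
        return -1;

    /* Step 4: once the old address is unusable, remove it (ASCONF DEL-IP). */
    return sctp_bindx(sd, (struct sockaddr *)&old_addr, 1, SCTP_BINDX_REM_ADDR);
}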

Source : mSCTP for Soft Handover in Transport Layer, Seok Joo Koh, Moon Jeong Chang, and Meejeong Lee, Member, IEEE

How Fingerprint Scanners Work

Computerized fingerprint scanners have been a mainstay of spy thrillers for decades, but up until recently, they were pretty exotic technology in the real world. In the past few years, however, scanners have started popping up all over the place -- in police stations, high-security buildings and even on PC keyboards. You can pick up a personal USB fingerprint scanner for less than $100, and just like that, your computer's guarded by high-tech biometrics. Instead of, or in addition to, a password, you need your distinctive print to gain access.

In this article, we'll examine the secrets behind this exciting development in law enforcement and identity security. We'll also see how fingerprint scanner security systems stack up to conventional password and identity card systems, and find out how they can fail.

Fingerprint Basics

Fingerprints are one of those bizarre twists of nature. Human beings happen to have built-in, easily accessible identity cards. You have a unique design, which represents you alone, literally at your fingertips. How did this happen?

People have tiny ridges of skin on their fingers because this particular adaptation was extremely advantageous to the ancestors of the human species. The pattern of ridges and "valleys" on fingers makes it easier for the hands to grip things, in the same way a rubber tread pattern helps a tire grip the road.


The other function of fingerprints is a total coincidence. Like everything in the human body, these ridges form through a combination of genetic and environmental factors. The genetic code in DNA gives general orders on the way skin should form in a developing fetus, but the specific way it forms is a result of random events. The exact position of the fetus in the womb at a particular moment and the exact composition and density of surrounding amniotic fluid decides how every individual ridge will form.

So, in addition to the countless things that go into deciding your genetic make-up in the first place, there are innumerable environmental factors influencing the formation of the fingers. Just like the weather conditions that form clouds or the coastline of a beach, the entire development process is so chaotic that, in the entire course of human history, there is virtually no chance of the same exact pattern forming twice.

Consequently, fingerprints are a unique marker for a person, even an identical twin. And while two prints may look basically the same at a glance, a trained investigator or an advanced piece of software can pick out clear, defined differences.

This is the basic idea of fingerprint analysis, in both crime investigation and security. A fingerprint scanner's job is to take the place of a human analyst by collecting a print sample and comparing it to other samples on record.

Optical Scanner

A fingerprint scanner system has two basic jobs -- it needs to get an image of your finger, and it needs to determine whether the pattern of ridges and valleys in this image matches the pattern of ridges and valleys in pre-scanned images.

There are a number of different ways to get an image of somebody's finger. The most common methods today are optical scanning and capacitance scanning. Both types come up with the same sort of image, but they go about it in completely different ways.

The heart of an optical scanner is a charge coupled device (CCD), the same light sensor system used in digital cameras and camcorders. A CCD is simply an array of light-sensitive diodes called photosites, which generate an electrical signal in response to light photons. Each photosite records a pixel, a tiny dot representing the light that hit that spot. Collectively, the light and dark pixels form an image of the scanned scene (a finger, for example). Typically, an analog-to-digital converter in the scanner system processes the analog electrical signal to generate a digital representation of this image. See How Digital Cameras Work for details on CCDs and digital conversion.

The scanning process starts when you place your finger on a glass plate, and a CCD camera takes a picture. The scanner has its own light source, typically an array of light-emitting diodes, to illuminate the ridges of the finger. The CCD system actually generates an inverted image of the finger, with darker areas representing more reflected light (the ridges of the finger) and lighter areas representing less reflected light (the valleys between the ridges).

Before comparing the print to stored data, the scanner processor makes sure the CCD has captured a clear image. It checks the average pixel darkness, or the overall values in a small sample, and rejects the scan if the overall image is too dark or too light. If the image is rejected, the scanner adjusts the exposure time to let in more or less light, and then tries the scan again.

If the darkness level is adequate, the scanner system goes on to check the image definition (how sharp the fingerprint scan is). The processor looks at several straight lines moving horizontally and vertically across the image. If the fingerprint image has good definition, a line running perpendicular to the ridges will be made up of alternating sections of very dark pixels and very light pixels.
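The darkness check is simple enough to sketch in a few lines of C. This is an illustration, not a real scanner's firmware: the image is assumed to be 8-bit grayscale, and the acceptance thresholds are arbitrary.

#include <stddef.h>

/* Return 1 if an 8-bit grayscale scan looks usable, 0 to retry the scan.
 * The acceptance band is an arbitrary illustration. */
int darkness_ok(const unsigned char *pixels, size_t n)
{
    unsigned long sum = 0;
    for (size_t i = 0; i < n; i++)
        sum += pixels[i];
    unsigned long avg = (n > 0) ? sum / n : 0;
    return avg > 40 && avg < 215;
}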

If the processor finds that the image is crisp and properly exposed, it proceeds to comparing the captured fingerprint with fingerprints on file. We'll look at this process in a minute, but first we'll examine the other major scanning technology, the capacitive scanner.

Capacitance Scanner

Like optical scanners, capacitive fingerprint scanners generate an image of the ridges and valleys that make up a fingerprint. But instead of sensing the print using light, the capacitors use electrical current.

The diagram below shows a simple capacitive sensor. The sensor is made up of one or more semiconductor chips containing an array of tiny cells. Each cell includes two conductor plates, covered with an insulating layer. The cells are tiny -- smaller than the width of one ridge on a finger.


The sensor is connected to an integrator, an electrical circuit built around an inverting operational amplifier. The inverting amplifier is a complex semiconductor device, made up of a number of transistors, resistors and capacitors. The details of its operation would fill an entire article by itself, but here we can get a general sense of what it does in a capacitance scanner. (Check out this page on operational amplifiers for a technical overview.)

Like any amplifier, an inverting amplifier alters one current based on fluctuations in another current (see How Amplifiers Work for more information). Specifically, the inverting amplifier alters a supply voltage. The alteration is based on the relative voltage of two inputs, called the inverting terminal and the non-inverting terminal. In this case, the non-inverting terminal is connected to ground, and the inverting terminal is connected to a reference voltage supply and a feedback loop. The feedback loop, which is also connected to the amplifier output, includes the two conductor plates.

As you may have recognized, the two conductor plates form a basic capacitor, an electrical component that can store up charge (see How Capacitors Work for details). The surface of the finger acts as a third capacitor plate, separated by the insulating layers in the cell structure and, in the case of the fingerprint valleys, a pocket of air. Varying the distance between the capacitor plates (by moving the finger closer or farther away from the conducting plates) changes the total capacitance (ability to store charge) of the capacitor. Because of this quality, the capacitor in a cell under a ridge will have a greater capacitance than the capacitor in a cell under a valley.

To scan the finger, the processor first closes the reset switch for each cell, which shorts each amplifier's input and output to "balance" the integrator circuit. When the switch is opened again, and the processor applies a fixed charge to the integrator circuit, the capacitors charge up. The capacitance of the feedback loop's capacitor affects the voltage at the amplifier's input, which affects the amplifier's output. Since the distance to the finger alters capacitance, a finger ridge will result in a different voltage output than a finger valley.

The scanner processor reads this voltage output and determines whether it is characteristic of a ridge or a valley. By reading every cell in the sensor array, the processor can put together an overall picture of the fingerprint, similar to the image captured by an optical scanner.

The main advantage of a capacitive scanner is that it requires a real fingerprint-type shape, rather than the pattern of light and dark that makes up the visual impression of a fingerprint. This makes the system harder to trick. Additionally, since they use a semiconductor chip rather than a CCD unit, capacitive scanners tend to be more compact than optical devices.

Analysis

In movies and TV shows, automated fingerprint analyzers typically overlay various fingerprint images to find a match. In actuality, this isn't a particularly practical way to compare fingerprints. Smudging can make two images of the same print look pretty different, so you're rarely going to get a perfect image overlay. Additionally, using the entire fingerprint image in comparative analysis uses a lot of processing power, and it also makes it easier for somebody to steal the print data.

Instead, most fingerprint scanner systems compare specific features of the fingerprint, generally known as minutiae. Typically, human and computer investigators concentrate on points where ridge lines end or where one ridge splits into two (bifurcations). Collectively, these and other distinctive features are sometimes called typica.

The scanner system software uses highly complex algorithms to recognize and analyze these minutiae. The basic idea is to measure the relative positions of minutiae, in the same sort of way you might recognize a part of the sky by the relative positions of stars. A simple way to think of it is to consider the shapes that various minutiae form when you draw straight lines between them. If two prints have three ridge endings and two bifurcations, forming the same shape with the same dimensions, there's a high likelihood they're from the same print.

To get a match, the scanner system doesn't have to find the entire pattern of minutiae both in the sample and in the print on record; it simply has to find a sufficient number of minutiae patterns that the two prints have in common. The exact number varies according to the scanner programming.
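As a toy illustration of matching by relative positions, the following C sketch compares the pairwise distances between minutiae in two prints and counts how many roughly agree (compile with -lm). It assumes the minutiae are already extracted and paired up by index, which real matchers do not get for free, and it ignores minutia type and ridge orientation.

#include <math.h>

struct minutia { double x, y; };

static double dist(struct minutia a, struct minutia b)
{
    return hypot(a.x - b.x, a.y - b.y);
}

/* Count the pairs (i, j) whose separation agrees between the two
 * prints to within 'tol'. A higher count means a likelier match. */
int matching_pairs(const struct minutia *a, const struct minutia *b,
                   int n, double tol)
{
    int matches = 0;
    for (int i = 0; i < n; i++)
        for (int j = i + 1; j < n; j++)
            if (fabs(dist(a[i], a[j]) - dist(b[i], b[j])) < tol)
                matches++;
    return matches;
}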

Pros and Cons

There are several ways a security system can verify that somebody is an authorized user. Most systems are looking for one or more of the following:
  • What you have
  • What you know
  • Who you are

To get past a "what you have" system, you need some sort of "token," such as an identity card with a magnetic strip. A "what you know" system requires you to enter a password or PIN number. A "who you are" system is actually looking for physical evidence that you are who you say you are -- a specific fingerprint, voice or iris pattern.

"Who you are" systems like fingerprint scanners have a number of advantages over other systems. To name few:

  • Physical attributes are much harder to fake than identity cards.
  • You can't guess a fingerprint pattern like you can guess a password.
  • You can't misplace your fingerprints, irises or voice like you can misplace an access card.
  • You can't forget your fingerprints like you can forget a password.

But, as effective as they are, they certainly aren't infallible, and they do have major disadvantages. Optical scanners can't always distinguish between a picture of a finger and the finger itself, and capacitive scanners can sometimes be fooled by a mold of a person's finger. If somebody did gain access to an authorized user's prints, the person could trick the scanner. In a worst-case scenario, a criminal could even cut off somebody's finger to get past a scanner security system. Some scanners have additional pulse and heat sensors to verify that the finger is alive, rather than a mold or dismembered digit, but even these systems can be fooled by a gelatin print mold over a real finger. (This site explains various ways somebody might trick a scanner.)

To make these security systems more reliable, it's a good idea to combine the biometric analysis with a conventional means of identification, such as a password (in the same way an ATM requires a bank card and a PIN code).

The real problem with biometric security systems is the extent of the damage when somebody does manage to steal the identity information. If you lose your credit card or accidentally tell somebody your secret PIN number, you can always get a new card or change your code. But if somebody steals your fingerprints, you're pretty much out of luck for the rest of your life. You wouldn't be able to use your prints as a form of identification until you were absolutely sure all copies had been destroyed. There's no way to get new prints.

But even with this significant drawback, fingerprint scanners and biometric systems are an excellent means of identification. In the future, they'll most likely become an integral part of most people's everyday lives, just like keys, ATM cards and passwords are today.

Source : here

Web 3.0

You've decided to go see a movie and grab a bite to eat afterward. You're in the mood for a comedy and some incredibly spicy Mexican food. Booting up your PC, you open a Web browser and head to Google to search for theater, movie and restaurant information. You need to know which movies are playing in the theaters near you, so you spend some time reading short descriptions of each film before making your choice. Also, you want to see which Mexican restaurants are close to each of these theaters. And, you may want to check for customer reviews for the restaurants. In total, you visit half a dozen Web sites before you're ready to head out the door.

Some Internet experts believe the next generation of the Web -- Web 3.0 -- will make tasks like your search for movies and food faster and easier. Instead of multiple searches, you might type a complex sentence or two in your Web 3.0 browser, and the Web will do the rest. In our example, you could type "I want to see a funny movie and then eat at a good Mexican restaurant. What are my options?" The Web 3.0 browser will analyze your request, search the Internet for all possible answers, and then organize the results for you.
That's not all. Many of these experts believe that the Web 3.0 browser will act like a personal assistant. As you search the Web, the browser learns what you are interested in. The more you use the Web, the more your browser learns about you and the less specific you'll need to be with your questions. Eventually you might be able to ask your browser open questions like "where should I go for lunch?" Your browser would consult its records of what you like and dislike, take into account your current location and then suggest a list of restaurants.

The Road to Web 3.0

Out of all the Internet buzzwords and jargon that have made the transition to the public consciousness, "Web 2.0" might be the best known. Even though a lot of people have heard of it, not many have any idea what Web 2.0 means. Some people claim that the term itself is nothing more than a marketing ploy designed to convince venture capitalists to invest millions of dollars into Web sites. It's true that when Dale Dougherty of O'Reilly Media came up with the term, there was no clear definition. There wasn't even any agreement about whether there had been a Web 1.0.

YouTube is an example of a Web 2.0 site.

Other people insist that Web 2.0 is a reality. In brief, the characteristics of Web 2.0 include:

  • The ability for visitors to make changes to Web pages: Amazon allows visitors to post product reviews. Using an online form, a visitor can add information to Amazon's pages that future visitors will be able to read.
  • Using Web pages to link people to other users: Social networking sites like Facebook and MySpace are popular in part because they make it easy for users to find each other and keep in touch.
  • Fast and efficient ways to share content: YouTube is the perfect example. A YouTube member can create a video and upload it to the site for others to watch in less than an hour.
  • New ways to get information: Today, Internet surfers can subscribe to a Web page's Really Simple Syndication (RSS) feeds and receive notifications of that Web page's updates as long as they maintain an Internet connection.
  • Expanding access to the Internet beyond the computer: Many people access the Internet through devices like cell phones or video game consoles; before long, some experts expect that consumers will access the Internet through television sets and other devices.

Think of Web 1.0 as a library. You can use it as a source of information, but you can't contribute to or change the information in any way. Web 2.0 is more like a big group of friends and acquaintances. You can still use it to receive information, but you also contribute to the conversation and make it a richer experience.

While there are still many people trying to get a grip on Web 2.0, others are already beginning to think about what comes next. What will Web 3.0 be like?

Web 3.0 Basics

Internet experts think Web 3.0 is going to be like having a personal assistant who knows practically everything about you and can access all the information on the Internet to answer any question. Many compare Web 3.0 to a giant database. While Web 2.0 uses the Internet to make connections between people, Web 3.0 will use the Internet to make connections with information. Some experts see Web 3.0 replacing the current Web while others believe it will exist as a separate network.

©iStockphoto/dstephens
Planning a tropical getaway? Web 3.0 might help simplify your travel plans.

It's easier to get the concept with an example. Let's say that you're thinking about going on a vacation. You want to go someplace warm and tropical. You have set aside a budget of $3,000 for your trip. You want a nice place to stay, but you don't want it to take up too much of your budget. You also want a good deal on a flight.

With the Web technology currently available to you, you'd have to do a lot of research to find the best vacation options. You'd need to research potential destinations and decide which one is right for you. You might visit two or three discount travel sites and compare rates for flights and hotel rooms. You'd spend a lot of your time looking through results on various search engine results pages. The entire process could take several hours.

According to some Internet experts, with Web 3.0 you'll be able to sit back and let the Internet do all the work for you. You could use a search service and narrow the parameters of your search. The browser program then gathers, analyzes and presents the data to you in a way that makes comparison a snap. It can do this because Web 3.0 will be able to understand information on the Web.

Right now, when you use a Web search engine, the engine isn't able to really understand your search. It looks for Web pages that contain the keywords found in your search terms. The search engine can't tell if the Web page is actually relevant for your search. It can only tell that the keyword appears on the Web page. For example, if you searched for the term "Saturn," you'd end up with results for Web pages about the planet and others about the car manufacturer.

A Web 3.0 search engine could find not only the keywords in your search, but also interpret the context of your request. It would return relevant results and suggest other content related to your search terms. In our vacation example, if you typed "tropical vacation destinations under $3,000" as a search request, the Web 3.0 browser might include a list of fun activities or great restaurants related to the search results. It would treat the entire Internet as a massive database of information available for any query.

Web 3.0 Approaches

You never know how future technology will eventually turn out. In the case of Web 3.0, most Internet experts agree about its general traits. They believe that Web 3.0 will provide users with richer and more relevant experiences. Many also believe that with Web 3.0, every user will have a unique Internet profile based on that user's browsing history. Web 3.0 will use this profile to tailor the browsing experience to each individual. That means that if two different people each performed an Internet search with the same keywords using the same service, they'd receive different results determined by their individual profiles.

©iStockphoto/ktsimage
Web 3.0 will likely plug into your individual tastes and browsing habits.

The technologies and software required for this kind of application aren't yet mature. Services like TiVo and Pandora provide individualized content based on user input, but they both rely on a trial-and-error approach that isn't as efficient as what the experts say Web 3.0 will be. More importantly, both TiVo and Pandora have a limited scope -- television shows and music, respectively -- whereas Web 3.0 will involve all the information on the Internet.

Some experts believe that the foundation for Web 3.0 will be application programming interfaces (APIs). An API is an interface designed to allow developers to create applications that take advantage of a certain set of resources. Many Web 2.0 sites include APIs that give programmers access to the sites' unique data and capabilities. For example, Facebook's API allows developers to create programs that use Facebook as a staging ground for games, quizzes, product reviews and more.

One Web 2.0 trend that could help the development of Web 3.0 is the mashup. A mashup is the combination of two or more applications into a single application. For example, a developer might combine a program that lets users review restaurants with Google Maps. The new mashup application could show not only restaurant reviews, but also map them out so that the user could see the restaurants' locations. Some Internet experts believe that creating mashups will be so easy in Web 3.0 that anyone will be able to do it.

Other experts think that Web 3.0 will start fresh. Instead of using HTML as the basic coding language, it will rely on some new -- and unnamed -- language. These experts suggest it might be easier to start from scratch rather than try to change the current Web. However, this version of Web 3.0 is so theoretical that it's practically impossible to say how it will work.

The man responsible for the World Wide Web has his own theory of what the future of the Web will be. He calls it the Semantic Web, and many Internet experts borrow heavily from his work when talking about Web 3.0.

Making a Semantic Web

Tim Berners-Lee invented the World Wide Web in 1989. He created it as an interface for the Internet and a way for people to share information with one another. Berners-Lee disputes the existence of Web 2.0, calling it nothing more than meaningless jargon [source: Register]. Berners-Lee maintains that he intended the World Wide Web to do all the things that Web 2.0 is supposed to do.

Catrina Genovese/Getty Images
Tim Berners-Lee, the inventor of the World Wide Web

Berners-Lee's vision of the future Web is similar to the concept of Web 3.0. It's called the Semantic Web. Right now, the Web's structure is geared for humans. It's easy for us to visit a Web page and understand what it's all about. Computers can't do that. A search engine might be able to scan for keywords, but it can't understand how those keywords are used in the context of the page.

With the Semantic Web, computers will scan and interpret information on Web pages using software agents. These software agents will be programs that crawl through the Web, searching for relevant information. They'll be able to do that because the Semantic Web will have collections of information called ontologies. In terms of the Internet, an ontology is a file that defines the relationships among a group of terms. For example, the term "cousin" refers to the familial relationship between two people who share one set of grandparents. A Semantic Web ontology might define each familial role like this:

  • Grandparent: A direct ancestor two generations removed from the subject
  • Parent: A direct ancestor one generation removed from the subject
  • Brother or sister: Someone who shares the same parent as the subject
  • Nephew or niece: Child of the brother or sister of the subject
  • Aunt or uncle: Sister or brother to a parent of the subject
  • Cousin: Child of an aunt or uncle of the subject

For the Semantic Web to be effective, ontologies have to be detailed and comprehensive. In Berners-Lee's concept, they would exist in the form of metadata. Metadata is information included in the code for Web pages that is invisible to humans, but readable by computers.

Constructing ontologies takes a lot of work. In fact, that's one of the big obstacles the Semantic Web faces. Will people be willing to put in the effort required to make comprehensive ontologies for their Web sites? Will they maintain them as the Web sites change? Critics suggest that the task of creating and maintaining such complex files is too much work for most people.

On the other hand, some people really enjoy labeling or tagging Web objects and information. Web tags categorize the tagged object or information. Several blogs include a tag option, making it easy to classify journal entries under specific topics. Photo sharing sites like Flickr allow users to tag pictures. Google has even turned it into a game: Google Image Labeler pits two people against each other in a labeling contest. Each player tries to create the largest number of relevant tags for a series of images. According to some experts, Web 3.0 will be able to search tags and labels and return the most relevant results back to the user. Perhaps Web 3.0 will combine Berners-Lee's concept of the Semantic Web with Web 2.0's tagging culture.

Even though Web 3.0 is more theory than reality, that hasn't stopped people from guessing what will come next.

Beyond Web 3.0

Whatever we call the next generation of the Web, what will come after it? Theories range from conservative predictions to guesses that sound more like science fiction films.

David Paul Morris/Getty Images
Paul Otellini, CEO and President of Intel, discusses the increasing importance of mobile devices on the Web at the 2008 International Consumer Electronics Show.

Here are just a few:

  • According to technology expert and entrepreneur Nova Spivack, the development of the Web moves in 10-year cycles. In the Web's first decade, most of the development focused on the back end, or infrastructure, of the Web. Programmers created the protocols and code languages we use to make Web pages. In the second decade, focus shifted to the front end and the era of Web 2.0 began. Now people use Web pages as platforms for other applications. They also create mashups and experiment with ways to make Web experiences more interactive. We're at the end of the Web 2.0 cycle now. The next cycle will be Web 3.0, and the focus will shift back to the back end. Programmers will refine the Internet's infrastructure to support the advanced capabilities of Web 3.0 browsers. Once that phase ends, we'll enter the era of Web 4.0. Focus will return to the front end, and we'll see thousands of new programs that use Web 3.0 as a foundation [source: Nova Spivack].
  • The Web will evolve into a three-dimensional environment. Rather than a Web 3.0, we'll see a Web 3D. Combining virtual reality elements with the persistent online worlds of massively multiplayer online roleplaying games (MMORPGs), the Web could become a digital landscape that incorporates the illusion of depth. You'd navigate the Web either from a first-person perspective or through a digital representation of yourself called an avatar (to learn more about an avatar's perspective, read How the Avatar Machine Works).
  • The Web will build on developments in distributed computing and lead to true artificial intelligence. In distributed computing, several computers tackle a large processing job. Each computer handles a small part of the overall task. Some people believe the Web will be able to think by distributing the workload across thousands of computers and referencing deep ontologies. The Web will become a giant brain capable of analyzing data and extrapolating new ideas based on that information.
  • The Web will extend far beyond computers and cell phones. Everything from watches to television sets to clothing will connect to the Internet. Users will have a constant connection to the Web, and vice versa. Each user's software agent will learn more about its respective user by electronically observing his or her activities. This might lead to debates about the balance between individual privacy and the benefit of having a personalized Web browsing experience.
  • The Web will merge with other forms of entertainment until all distinctions between the forms of media are lost. Radio programs, television shows and feature films will rely on the Web as a delivery system.

It's too early to tell which (if any) of these future versions of the Web will come true. It may be that the real future of the Web is even more extravagant than the most extreme predictions. We can only hope that by the time the future of the Web gets here, we can all agree on what to call it.

Source : here

Friday, 05 March 2010

How Web Operating Systems Work

As the Web evolves, people invent new words to describe its features and applications. Sometimes, a term gains widespread acceptance even if some people believe it's misleading or inaccurate. Such is the case with Web operating systems.

©2008 HowStuffWorks
The AstraNOS operating system login screen.

An operating system (OS) is a special kind of program that organizes and controls computer hardware and software. Operating systems interact directly with computer hardware and serve as a platform for other applications. Whether it's Windows, Linux, Unix or Mac OS X, your computer depends on its OS to function.

That's why some people object to the term Web OS. A Web OS is a user interface (UI) that allows people to access applications stored completely or in part on the Web. It might mimic the user interface of traditional computer operating systems like Windows, but it doesn't interact directly with the computer's hardware. The user must still have a traditional OS on his or her computer.

While there aren't many computer operating systems to choose from, the same can't be said of Web operating systems. There are dozens of Web operating systems available. Some of them offer a wide range of services, while others are still in development and only provide limited functionality. In some cases, there may be a single ambitious programmer behind the project. Other Web operating systems are the product of a large team effort. Some are free to download, and others charge a fee. Web operating systems can come in all shapes and sizes.

What do Web operating systems do?

Web operating systems are interfaces to distributed computing systems, particularly cloud or utility computing systems. In these systems, a company provides computer services to users through an Internet connection. The provider runs a system of computers that include application servers and databases.

With some systems, people access the applications using Web browsers like Firefox or Internet Explorer. With other systems, users must download a program that creates a system-specific client. A client is software that accesses information or services from other software. In either case, users access programs that are stored not on their own computers, but on the Web.

What sort of services do they provide? Web operating systems can give users access to practically any program they could run on a computer's desktop. Common applications include:

  • Calendars
  • E-mail
  • File management
  • Games
  • Instant messaging programs
  • Photo, video and audio editing programs
  • RSS readers
  • Spreadsheet programs
  • Word processing programs

With traditional computer operating systems, you'd have to install applications on your own computer. The applications would exist on your computer's hard disk drive. They would run by accessing the processing power of your computer's central processing unit (CPU), sending electronic requests to your computer's OS.

Web operating systems can't replace your computer's native OS -- in fact, they depend on traditional computer operating systems to work. The user side of Web OS software, whether it's a Web browser or a system-specific client, runs on top of your computer's OS. But programmers design Web operating systems to look and act like a desktop OS. A Web OS might look a lot like a traditional OS, but it doesn't manage your computer's hardware or software.

©2008 HowStuffWorks
Portals like iGoogle aren't true operating systems, but they do pull information from other Web pages into a centralized site.

A Web OS allows you to access applications stored not on your computer, but on the Web. The applications exist wholly or in part on Web servers within a particular provider network. When you save information in an application, you might not store it on your computer. Instead, you save the information to databases connected to the Internet. Some Web operating systems also give you the option to save information to your local hard disk drive.

Because Web operating systems aren't tied to a specific computer or device, you can access Web applications and data from any device connected to the Internet. That is, you can do it as long as the device can run the Web operating software (whether that's a particular Web browser or client). This means that you can access the Web OS on one computer, create a document, save the work and then access it again later using a completely different machine. Web operating systems offer users the benefit of accessibility -- data isn't tied down to your computer.

The Technology of Web Operating Systems


With so many different Web operating systems either currently available or in development, it should come as no surprise that programmers use different approaches to achieve the same effect. While the goal of a Web OS is to provide an experience similar to using a desktop OS, there are no hard and fast rules for how to make that happen. The two most popular approaches rely on Flash technologies or Asynchronous JavaScript and XML (AJAX) technologies.

Flash is a set of technologies that enable programmers to create interactive Web pages. It's a technology that uses vector graphics. Vector graphics record image data as a collection of shapes and lines rather than individual pixels, which allows computers to load Flash images and animation faster than pixel-based graphics.

Flash files stream over the Internet, which means the end user accessing the file doesn't have to wait for the entire file to download to his or her computer before accessing parts of it. With Flash-based programs like YouTube's video player, this means you can start watching a film clip without having to download it first.

More than 98 percent of all computers connected to the Internet have a Flash player installed [source: Adobe]. That makes Flash an attractive approach for many programmers. They can create a Web OS knowing that the vast majority of computer users will be able to access it without having to download additional software.

AJAX technologies rely on hypertext markup language (HTML), the JavaScript programming language, Cascading Style Sheets (CSS) and eXtensible Markup Language (XML). It's a browser technology. The HTML language is a collection of markup tags programmers use on text files that tell Web browsers how to display the text file as a Web page. CSS is a tool that gives programmers more options when tweaking a Web site's appearance. Programmers can create a style sheet with certain attributes such as font style and color, and then apply those styles across several Web pages at once. JavaScript is a programming language that allows applications to send information back and forth between servers and browsers. XML is a markup language, which means programmers use it to describe the structure of information within a file and how it relates to other information.

The "asynchronous" aspect of AJAX means that AJAX applications transfer data between servers and browsers in small bits of information as needed. The alternative is to send an entire Web page to the browser every time something changes, which would significantly slow down the user's experience. With sufficient skill and knowledge, a programmer can create an AJAX application with the same functions as a desktop application.

As with Flash, most computers can run AJAX applications. That's because AJAX isn't a new programming language but rather a way to use established Web standards to create new applications. As long as an application programmer includes the right information in an application's code, it should run fine on any major Web browser. Some well-known Web applications based on AJAX include Google Calendar and Gmail.

Why Use a Web OS?


Web operating systems simplify a user's experience when accessing applications hosted on remote servers. Ideally, a Web OS behaves like a desktop OS. The more familiar and intuitive the system, the faster people will learn how to use it. When a person chooses to run a certain application, his or her computer sends a request to the system's control node -- a special server that acts as a system administrator. The control node interprets the request and connects the user's client to the appropriate application server or database. By offloading applications, storage and processing power to a remote network, users don't have to worry about upgrading computer systems every few years.
©2008 HowStuffWorks
YouOS is one of the more popular Web operating systems on the Internet.

For many people, that's the most attractive feature of Web operating systems. As long as their computers can run the browser or client software necessary to access the system, there's no need to upgrade. Some people become frustrated when they have to purchase new computers in order to run current software. With distributed computing, it's the provider's responsibility to provide application functionality. If the provider isn't able to meet user demands, users might look elsewhere for services.

Web operating systems can also make it easier to share data between computers. Perhaps you own both a Mac computer and a PC. It can be challenging to share data between the two different computers. Even if you use file formats that are compatible with both Macs and PCs, you could end up with a copy of the same file on each machine, and changes to one copy aren't reflected in the other. Web operating systems provide an interface where you can use any computer to create, modify and access a single copy of a file saved on a remote database. As long as the Web OS you're using can cross platforms, meaning it works on both Macs and PCs, you'll be able to work on the file at any time using either of your computers.

Likewise, Web operating systems can simplify collaborative projects. Many Web operating systems allow users to share files. Each user can work from the file saved to the system's native network. For many users, this is an attractive alternative to organizing multiple copies of the same file and then incorporating everyone's changes into a new version.

Source : here

Byte Prefixes and Binary Math

When you start talking about lots of bytes, you get into prefixes like kilo, mega and giga, as in kilobyte, megabyte and gigabyte (also shortened to K, M and G, as in Kbytes, Mbytes and Gbytes or KB, MB and GB). The following table shows the binary multipliers:

Name    Abbr.   Size
Kilo    K       2^10 = 1,024
Mega    M       2^20 = 1,048,576
Giga    G       2^30 = 1,073,741,824
Tera    T       2^40 = 1,099,511,627,776
Peta    P       2^50 = 1,125,899,906,842,624
Exa     E       2^60 = 1,152,921,504,606,846,976
Zetta   Z       2^70 = 1,180,591,620,717,411,303,424
Yotta   Y       2^80 = 1,208,925,819,614,629,174,706,176


You can see in this chart that kilo is about a thousand, mega is about a million, giga is about a billion, and so on. So when someone says, "This computer has a 2 gig hard drive," what he or she means is that the hard drive stores 2 gigabytes, or approximately 2 billion bytes, or exactly 2,147,483,648 bytes. How could you possibly need 2 gigabytes of space? When you consider that one CD holds 650 megabytes, you can see that just three CDs' worth of data will fill the whole thing! Terabyte databases are fairly common these days, and there are probably a few petabyte databases floating around the Pentagon by now.
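Because the multipliers are just powers of two, you can reproduce the table (and the 2-gigabyte figure above) with a few shifts in C:

#include <stdio.h>

int main(void)
{
    printf("kilo: %llu\n", 1ULL << 10);  /* 1,024         */
    printf("mega: %llu\n", 1ULL << 20);  /* 1,048,576     */
    printf("giga: %llu\n", 1ULL << 30);  /* 1,073,741,824 */
    printf("2 GB: %llu\n", 2ULL << 30);  /* 2,147,483,648 */
    return 0;
}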

Binary math works just like decimal math, except that the value of each bit can be only 0 or 1. To get a feel for binary math, let's start with decimal addition and see how it works. Assume that we want to add 452 and 751:

  452
+ 751
-----
 1203


To add these two numbers together, you start at the right: 2 + 1 = 3. No problem. Next, 5 + 5 = 10, so you save the zero and carry the 1 over to the next place. Next, 4 + 7 + 1 (because of the carry) = 12, so you save the 2 and carry the 1. Finally, 0 + 0 + 1 = 1. So the answer is 1203.

Binary addition works exactly the same way:

  010
+ 111
-----
 1001

Starting at the right, 0 + 1 = 1 for the first digit. No carrying there. You've got 1 + 1 = 10 for the second digit, so save the 0 and carry the 1. For the third digit, 0 + 1 + 1 = 10, so save the zero and carry the 1. For the last digit, 0 + 0 + 1 = 1. So the answer is 1001. If you translate everything over to decimal you can see it is correct: 2 + 7 = 9.
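You can check the worked example with a short C program that adds the two numbers and prints the sum in binary:

#include <stdio.h>

static void print_binary(unsigned v)
{
    if (v > 1)
        print_binary(v >> 1);
    putchar('0' + (v & 1));
}

int main(void)
{
    unsigned a = 2;        /* 010 in binary */
    unsigned b = 7;        /* 111 in binary */
    print_binary(a + b);   /* prints 1001   */
    putchar('\n');
    return 0;
}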

To sum up, here's what we've learned about bits and bytes:

  • Bits are binary digits. A bit can hold the value 0 or 1.
  • Bytes are made up of 8 bits each.
  • Binary math works just like decimal math, but each bit can have a value of only 0 or 1.
Source : here