Sabtu, 13 Maret 2010

How to Set Up SCTP in Linux

Before we start to set up SCTP in Linux, a Fedora 12 system with a kernel newer than 2.6.31 should be prepared. Because Fedora 12 ships SCTP as a kernel module, no kernel recompile is needed in our case. Instead, the SCTP module simply needs to be loaded into memory with the command ‘modprobe’. The command ‘modprobe sctp’ loads the SCTP module; see Figure 1.


Figure 1 Load SCTP module into the kernel

The SCTP module should be loaded on both the server and the client. After that, we can assume that both the server and the client have a Linux platform configured to support the SCTP protocol. The next step is to activate the DAR (dynamic address reconfiguration) extension of SCTP, to ensure that mSCTP is supported by Linux. The parameter ‘addip_enable’ indicates whether the DAR extension is active: when ‘addip_enable’ is 0, the Add-IP extension is inactive; when it is 1, the extension is active.

The command ‘echo 1 > /proc/sys/net/sctp/addip_enable’ makes Linux support mSCTP. The command ‘more /proc/sys/net/sctp/addip_enable’ confirms the setting.

See Figure 2:

Figure 2 Activating the Add-IP extension of SCTP

One problem with the SCTP protocol in Linux is that the kernel does not itself provide the SCTP API functions that are required for coding the mSCTP handover. At this point, we downloaded an additional tool called LKSCTP from http://sourceforge.net/projects/lksctp/files/, which provides the SCTP API functions. There are many versions of the LKSCTP tool; the latest is 1.0.11, and the one used in our testbed is version 1.0.10. The following steps have been taken to build LKSCTP in Linux:
  • Become the root user to install LKSCTP by command: su –
  • Enter the directory containing the downloaded LKSCTP files by command: cd /root/fde13 (my own directory).
  • Install the RPM files by command: rpm -ivh lksctp-tools-1.0.10-1.rpm
  • Untar the LKSCTP tools directory from the gzipped tarball by command: tar -xzvf lksctp-tools-1.0.10.tar.gz
  • Enter the LKSCTP tools directory by command: cd lksctp-tools-1.0.10
  • Configure LKSCTP by command: ./configure
  • Build LKSCTP by command: make
After the “make” operation succeeds, the LKSCTP tools are built and ready to use. The following figure shows how to check whether LKSCTP is supported by Linux.

Figure 3 LKSCTP tools for Linux

In Figure 3, the command ‘checksctp’ indicates whether the server and the client support LKSCTP. The result shows that both of them support it.
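The same check can be scripted. Here is a minimal sketch in Python that mimics what ‘checksctp’ reports by attempting to open an SCTP socket; it assumes a Linux host, and the function name is our own:

```python
import socket

# IPPROTO_SCTP is exposed by the socket module on Linux; fall back to
# its well-known protocol number (132) if the attribute is missing.
IPPROTO_SCTP = getattr(socket, "IPPROTO_SCTP", 132)

def kernel_supports_sctp() -> bool:
    """Try to open an SCTP socket; fails if the sctp module is not loaded."""
    try:
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM, IPPROTO_SCTP)
        s.close()
        return True
    except OSError:
        # e.g. EPROTONOSUPPORT when SCTP is unavailable in the kernel
        return False

if __name__ == "__main__":
    print("SCTP supported" if kernel_supports_sctp() else "SCTP not supported")
```

On the testbed described above, this should print that SCTP is supported once ‘modprobe sctp’ has been run.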

Senin, 08 Maret 2010

Soft Handover Procedure in mSCTP

In this section, we describe how to use mSCTP for soft handover in the transport layer. As an example, we consider a mobile node (MN) that initiates an SCTP association with a correspondent node (CN) in IPv6 networks; the IPv4 case follows similar procedures. After initiation of the SCTP association, the MN moves from access router A to access router B, as shown in Fig. 1.


Figure 1

It is assumed that the MN initiates an association with the CN. The resulting SCTP association consists of IP address 2 for the MN and IP address 1 for the CN. Then the procedural steps described below, Step 1 through Step 4, are repeated whenever the MN moves to a new location, until the SCTP association is released.

Step 1) Obtaining an IP address for a new location: Assume that the MN moves from AR A to AR B and is now in the overlapping region. In this phase, we also assume that the MN can obtain a new IP address 3 from AR B by using IPv6 stateless address configuration.

Step 2) Adding the new IP address to the SCTP association: After obtaining a new IP address, the MN’s SCTP informs the CN’s SCTP that it will use a new IP address. This is done by sending an SCTP ASCONF chunk to the CN. The MN receives the responding ASCONF-ACK chunk from the CN.

Step 3) Changing the primary IP address: As the MN continues to move toward AR B, it needs to make the new IP address its primary IP address according to an appropriate rule. In fact, configuring a specific rule to trigger this “primary address change” is a challenging issue for mSCTP.

Step 4) Deleting the old IP address from the SCTP association: As the MN continues to move toward AR B, once the old IP address becomes inactive, the MN must delete it from the address list. The rule for determining whether an IP address is inactive may also be implemented using information from the underlying network or physical layer.
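The four steps above amount to address-list bookkeeping on the MN side. The following Python sketch simulates only that bookkeeping, not the real ASCONF/ASCONF-ACK exchange; the class and method names are our own invention:

```python
class MsctpAssociation:
    """Simulates the MN-side address list of an mSCTP association."""

    def __init__(self, initial_ip):
        self.addresses = [initial_ip]
        self.primary = initial_ip

    def add_ip(self, ip):
        # Step 2: in real mSCTP, send an ASCONF Add-IP chunk and
        # wait for the ASCONF-ACK before using the address.
        if ip not in self.addresses:
            self.addresses.append(ip)

    def set_primary(self, ip):
        # Step 3: ASCONF Set-Primary-Address
        if ip not in self.addresses:
            raise ValueError("cannot set an unknown address as primary")
        self.primary = ip

    def delete_ip(self, ip):
        # Step 4: ASCONF Delete-IP; the primary address must not be deleted
        if ip == self.primary:
            raise ValueError("cannot delete the primary address")
        self.addresses.remove(ip)

# Handover from AR A (IP address 2) to AR B (IP address 3)
assoc = MsctpAssociation("ip2")   # association initiated via AR A
assoc.add_ip("ip3")               # Steps 1-2: new address obtained and added
assoc.set_primary("ip3")          # Step 3: primary address changed
assoc.delete_ip("ip2")            # Step 4: old address deleted
```

After the four calls, the association holds only the new address, mirroring the completed handover.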

Source : mSCTP for Soft Handover in Transport Layer, Seok Joo Koh, Moon Jeong Chang, and Meejeong Lee, Member, IEEE

How Fingerprint Scanners Work

Computerized fingerprint scanners have been a mainstay of spy thrillers for decades, but up until recently, they were pretty exotic technology in the real world. In the past few years, however, scanners have started popping up all over the place -- in police stations, high-security buildings and even on PC keyboards. You can pick up a personal USB fingerprint scanner for less than $100, and just like that, your computer's guarded by high-tech biometrics. Instead of, or in addition to, a password, you need your distinctive print to gain access.

In this article, we'll examine the secrets behind this exciting development in law enforcement and identity security. We'll also see how fingerprint scanner security systems stack up to conventional password and identity card systems, and find out how they can fail.

Fingerprint Basics

Fingerprints are one of those bizarre twists of nature. Human beings happen to have built-in, easily accessible identity cards. You have a unique design, which represents you alone, literally at your fingertips. How did this happen?

People have tiny ridges of skin on their fingers because this particular adaptation was extremely advantageous to the ancestors of the human species. The pattern of ridges and "valleys" on the fingers makes it easier for the hands to grip things, in the same way a rubber tread pattern helps a tire grip the road.


The other function of fingerprints is a total coincidence. Like everything in the human body, these ridges form through a combination of genetic and environmental factors. The genetic code in DNA gives general orders on the way skin should form in a developing fetus, but the specific way it forms is a result of random events. The exact position of the fetus in the womb at a particular moment and the exact composition and density of surrounding amniotic fluid decides how every individual ridge will form.

So, in addition to the countless things that go into deciding your genetic make-up in the first place, there are innumerable environmental factors influencing the formation of the fingers. Just like the weather conditions that form clouds or the coastline of a beach, the entire development process is so chaotic that, in the entire course of human history, there is virtually no chance of the same exact pattern forming twice.

Consequently, fingerprints are a unique marker for a person, even an identical twin. And while two prints may look basically the same at a glance, a trained investigator or an advanced piece of software can pick out clear, defined differences.

This is the basic idea of fingerprint analysis, in both crime investigation and security. A fingerprint scanner's job is to take the place of a human analyst by collecting a print sample and comparing it to other samples on record.

Optical Scanner

A fingerprint scanner system has two basic jobs -- it needs to get an image of your finger, and it needs to determine whether the pattern of ridges and valleys in this image matches the pattern of ridges and valleys in pre-scanned images.

There are a number of different ways to get an image of somebody's finger. The most common methods today are optical scanning and capacitance scanning. Both types come up with the same sort of image, but they go about it in completely different ways.

The heart of an optical scanner is a charge coupled device (CCD), the same light sensor system used in digital cameras and camcorders. A CCD is simply an array of light-sensitive diodes called photosites, which generate an electrical signal in response to light photons. Each photosite records a pixel, a tiny dot representing the light that hit that spot. Collectively, the light and dark pixels form an image of the scanned scene (a finger, for example). Typically, an analog-to-digital converter in the scanner system processes the analog electrical signal to generate a digital representation of this image. See How Digital Cameras Work for details on CCDs and digital conversion.

The scanning process starts when you place your finger on a glass plate, and a CCD camera takes a picture. The scanner has its own light source, typically an array of light-emitting diodes, to illuminate the ridges of the finger. The CCD system actually generates an inverted image of the finger, with darker areas representing more reflected light (the ridges of the finger) and lighter areas representing less reflected light (the valleys between the ridges).

Before comparing the print to stored data, the scanner processor makes sure the CCD has captured a clear image. It checks the average pixel darkness, or the overall values in a small sample, and rejects the scan if the overall image is too dark or too light. If the image is rejected, the scanner adjusts the exposure time to let in more or less light, and then tries the scan again.

If the darkness level is adequate, the scanner system goes on to check the image definition (how sharp the fingerprint scan is). The processor looks at several straight lines moving horizontally and vertically across the image. If the fingerprint image has good definition, a line running perpendicular to the ridges will be made up of alternating sections of very dark pixels and very light pixels.
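Both quality checks can be illustrated on a single scan line. The sketch below is a toy version; the pixel values and thresholds are invented for illustration and do not come from any real scanner firmware:

```python
def exposure_ok(row, low=60, high=200):
    """Reject the scan if the average pixel value is too dark or too light."""
    avg = sum(row) / len(row)
    return low <= avg <= high

def definition_ok(row, threshold=128, min_transitions=4):
    """A line running perpendicular to the ridges should alternate
    between runs of dark pixels (ridges) and light pixels (valleys)."""
    bits = [p < threshold for p in row]  # True = dark pixel
    transitions = sum(1 for a, b in zip(bits, bits[1:]) if a != b)
    return transitions >= min_transitions

ridge_row = [20, 25, 230, 235, 18, 22, 240, 228, 15, 245]   # clear ridges
flat_row  = [178, 182, 180, 179, 181, 183, 177, 180, 182, 179]  # no ridges

print(exposure_ok(ridge_row), definition_ok(ridge_row))  # True True
print(exposure_ok(flat_row), definition_ok(flat_row))    # True False
```

The second row is properly exposed but has no dark/light alternation, so a real scanner would retry the capture.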

If the processor finds that the image is crisp and properly exposed, it proceeds to comparing the captured fingerprint with fingerprints on file. We'll look at this process in a minute, but first we'll examine the other major scanning technology, the capacitive scanner.

Capacitance Scanner

Like optical scanners, capacitive fingerprint scanners generate an image of the ridges and valleys that make up a fingerprint. But instead of sensing the print using light, the capacitors use electrical current.

The diagram below shows a simple capacitive sensor. The sensor is made up of one or more semiconductor chips containing an array of tiny cells. Each cell includes two conductor plates, covered with an insulating layer. The cells are tiny -- smaller than the width of one ridge on a finger.


The sensor is connected to an integrator, an electrical circuit built around an inverting operational amplifier. The inverting amplifier is a complex semiconductor device, made up of a number of transistors, resistors and capacitors. The details of its operation would fill an entire article by itself, but here we can get a general sense of what it does in a capacitance scanner. (Check out this page on operational amplifiers for a technical overview.)

Like any amplifier, an inverting amplifier alters one current based on fluctuations in another current (see How Amplifiers Work for more information). Specifically, the inverting amplifier alters a supply voltage. The alteration is based on the relative voltage of two inputs, called the inverting terminal and the non-inverting terminal. In this case, the non-inverting terminal is connected to ground, and the inverting terminal is connected to a reference voltage supply and a feedback loop. The feedback loop, which is also connected to the amplifier output, includes the two conductor plates.

As you may have recognized, the two conductor plates form a basic capacitor, an electrical component that can store up charge (see How Capacitors Work for details). The surface of the finger acts as a third capacitor plate, separated by the insulating layers in the cell structure and, in the case of the fingerprint valleys, a pocket of air. Varying the distance between the capacitor plates (by moving the finger closer or farther away from the conducting plates) changes the total capacitance (ability to store charge) of the capacitor. Because of this quality, the capacitor in a cell under a ridge will have a greater capacitance than the capacitor in a cell under a valley.

To scan the finger, the processor first closes the reset switch for each cell, which shorts each amplifier's input and output to "balance" the integrator circuit. When the switch is opened again, and the processor applies a fixed charge to the integrator circuit, the capacitors charge up. The capacitance of the feedback loop's capacitor affects the voltage at the amplifier's input, which affects the amplifier's output. Since the distance to the finger alters capacitance, a finger ridge will result in a different voltage output than a finger valley.
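The ridge/valley distinction follows from the parallel-plate capacitor formula C = εA/d: a ridge sits closer to the plates (smaller d), so its cell has the larger capacitance. A toy calculation, with made-up but order-of-magnitude cell dimensions:

```python
EPSILON_0 = 8.854e-12  # F/m, permittivity of free space

def plate_capacitance(area_m2, distance_m, eps_r=1.0):
    """Parallel-plate approximation: C = eps_r * eps_0 * A / d."""
    return eps_r * EPSILON_0 * area_m2 / distance_m

cell_area = (40e-6) ** 2                        # 40 um x 40 um sensor cell
c_ridge   = plate_capacitance(cell_area, 1e-6)   # skin touching: ~1 um gap
c_valley  = plate_capacitance(cell_area, 50e-6)  # air pocket: ~50 um gap

print(c_ridge > c_valley)  # True: the cell under a ridge stores more charge
```

The processor only needs this relative difference, not the absolute values, to label each cell as ridge or valley.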

The scanner processor reads this voltage output and determines whether it is characteristic of a ridge or a valley. By reading every cell in the sensor array, the processor can put together an overall picture of the fingerprint, similar to the image captured by an optical scanner.

The main advantage of a capacitive scanner is that it requires a real fingerprint-type shape, rather than the pattern of light and dark that makes up the visual impression of a fingerprint. This makes the system harder to trick. Additionally, since they use a semiconductor chip rather than a CCD unit, capacitive scanners tend to be more compact than optical devices.

Analysis

In movies and TV shows, automated fingerprint analyzers typically overlay various fingerprint images to find a match. In actuality, this isn't a particularly practical way to compare fingerprints. Smudging can make two images of the same print look pretty different, so you're rarely going to get a perfect image overlay. Additionally, using the entire fingerprint image in comparative analysis uses a lot of processing power, and it also makes it easier for somebody to steal the print data.

Instead, most fingerprint scanner systems compare specific features of the fingerprint, generally known as minutiae. Typically, human and computer investigators concentrate on points where ridge lines end or where one ridge splits into two (bifurcations). Collectively, these and other distinctive features are sometimes called typica.

The scanner system software uses highly complex algorithms to recognize and analyze these minutiae. The basic idea is to measure the relative positions of minutiae, in the same sort of way you might recognize a part of the sky by the relative positions of stars. A simple way to think of it is to consider the shapes that various minutiae form when you draw straight lines between them. If two prints have three ridge endings and two bifurcations, forming the same shape with the same dimensions, there's a high likelihood they're from the same print.

To get a match, the scanner system doesn't have to find the entire pattern of minutiae both in the sample and in the print on record, it simply has to find a sufficient number of minutiae patterns that the two prints have in common. The exact number varies according to the scanner programming.
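The "relative positions" idea can be sketched by comparing the pairwise distances between minutiae, which do not change when the finger is shifted or rotated on the plate. This is a drastically simplified toy matcher, nothing like a production algorithm:

```python
from itertools import combinations
from math import dist

def pairwise_distances(minutiae):
    """Sorted distances between every pair of minutiae points."""
    return sorted(dist(a, b) for a, b in combinations(minutiae, 2))

def prints_match(sample, record, tolerance=2.0, min_common=3):
    """Count how many pairwise distances the two prints have in common."""
    common = 0
    remaining = list(pairwise_distances(record))
    for d in pairwise_distances(sample):
        for i, r in enumerate(remaining):
            if abs(d - r) <= tolerance:
                common += 1
                del remaining[i]
                break
    return common >= min_common

on_file = [(10, 10), (40, 15), (25, 50)]   # ridge endings / bifurcations
sample  = [(12, 12), (42, 17), (27, 52)]   # same finger, shifted by (2, 2)
other   = [(5, 5), (90, 80), (60, 10)]     # a different finger

print(prints_match(sample, on_file))  # True
print(prints_match(other, on_file))   # False
```

As the article notes, a real system tunes how many common patterns are "sufficient"; here that is the `min_common` parameter.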

Pros and Cons

There are several ways a security system can verify that somebody is an authorized user. Most systems are looking for one or more of the following:
  • What you have
  • What you know
  • Who you are

To get past a "what you have" system, you need some sort of "token," such as an identity card with a magnetic strip. A "what you know" system requires you to enter a password or PIN number. A "who you are" system is actually looking for physical evidence that you are who you say you are -- a specific fingerprint, voice or iris pattern.

"Who you are" systems like fingerprint scanners have a number of advantages over other systems. To name a few:

  • Physical attributes are much harder to fake than identity cards.
  • You can't guess a fingerprint pattern like you can guess a password.
  • You can't misplace your fingerprints, irises or voice like you can misplace an access card.
  • You can't forget your fingerprints like you can forget a password.

But, as effective as they are, they certainly aren't infallible, and they do have major disadvantages. Optical scanners can't always distinguish between a picture of a finger and the finger itself, and capacitive scanners can sometimes be fooled by a mold of a person's finger. If somebody did gain access to an authorized user's prints, the person could trick the scanner. In a worst-case scenario, a criminal could even cut off somebody's finger to get past a scanner security system. Some scanners have additional pulse and heat sensors to verify that the finger is alive, rather than a mold or dismembered digit, but even these systems can be fooled by a gelatin print mold over a real finger. (This site explains various ways somebody might trick a scanner.)

To make these security systems more reliable, it's a good idea to combine the biometric analysis with a conventional means of identification, such as a password (in the same way an ATM requires a bank card and a PIN code).

The real problem with biometric security systems is the extent of the damage when somebody does manage to steal the identity information. If you lose your credit card or accidentally tell somebody your secret PIN number, you can always get a new card or change your code. But if somebody steals your fingerprints, you're pretty much out of luck for the rest of your life. You wouldn't be able to use your prints as a form of identification until you were absolutely sure all copies had been destroyed. There's no way to get new prints.

But even with this significant drawback, fingerprint scanners and biometric systems are an excellent means of identification. In the future, they'll most likely become an integral part of most peoples' everyday life, just like keys, ATM cards and passwords are today.

Source : here

Web 3.0

You've decided to go see a movie and grab a bite to eat afterward. You're in the mood for a comedy and some incredibly spicy Mexican food. Booting up your PC, you open a Web browser and head to Google to search for theater, movie and restaurant information. You need to know which movies are playing in the theaters near you, so you spend some time reading short descriptions of each film before making your choice. Also, you want to see which Mexican restaurants are close to each of these theaters. And, you may want to check for customer reviews for the restaurants. In total, you visit half a dozen Web sites before you're ready to head out the door.

Some Internet experts believe the next generation of the Web -- Web 3.0 -- will make tasks like your search for movies and food faster and easier. Instead of multiple searches, you might type a complex sentence or two in your Web 3.0 browser, and the Web will do the rest. In our example, you could type "I want to see a funny movie and then eat at a good Mexican restaurant. What are my options?" The Web 3.0 browser will analyze your response, search the Internet for all possible answers, and then organize the results for you.
That's not all. Many of these experts believe that the Web 3.0 browser will act like a personal assistant. As you search the Web, the browser learns what you are interested in. The more you use the Web, the more your browser learns about you and the less specific you'll need to be with your questions. Eventually you might be able to ask your browser open questions like "where should I go for lunch?" Your browser would consult its records of what you like and dislike, take into account your current location and then suggest a list of restaurants.

The Road to Web 3.0

Out of all the Internet buzzwords and jargon that have made the transition to the public consciousness, "Web 2.0" might be the best known. Even though a lot of people have heard of it, not many have any idea what Web 2.0 means. Some people claim that the term itself is nothing more than a marketing ploy designed to convince venture capitalists to invest millions of dollars in Web sites. It's true that when Dale Dougherty of O'Reilly Media came up with the term, there was no clear definition. There wasn't even any agreement about whether there had been a Web 1.0.

YouTube is an example of a Web 2.0 site.

Other people insist that Web 2.0 is a reality. In brief, the characteristics of Web 2.0 include:

  • The ability for visitors to make changes to Web pages: Amazon allows visitors to post product reviews. Using an online form, a visitor can add information to Amazon's pages that future visitors will be able to read.
  • Using Web pages to link people to other users: Social networking sites like Facebook and MySpace are popular in part because they make it easy for users to find each other and keep in touch.
  • Fast and efficient ways to share content: YouTube is the perfect example. A YouTube member can create a video and upload it to the site for others to watch in less than an hour.
  • New ways to get information: Today, Internet surfers can subscribe to a Web page's Really Simple Syndication (RSS) feeds and receive notifications of that Web page's updates as long as they maintain an Internet connection.
  • Expanding access to the Internet beyond the computer: Many people access the Internet through devices like cell phones or video game consoles; before long, some experts expect that consumers will access the Internet through television sets and other devices.

Think of Web 1.0 as a library. You can use it as a source of information, but you can't contribute to or change the information in any way. Web 2.0 is more like a big group of friends and acquaintances. You can still use it to receive information, but you also contribute to the conversation and make it a richer experience.

While there are still many people trying to get a grip on Web 2.0, others are already beginning to think about what comes next. What will Web 3.0 be like?

Web 3.0 Basics

Internet experts think Web 3.0 is going to be like having a personal assistant who knows practically everything about you and can access all the information on the Internet to answer any question. Many compare Web 3.0 to a giant database. While Web 2.0 uses the Internet to make connections between people, Web 3.0 will use the Internet to make connections with information. Some experts see Web 3.0 replacing the current Web while others believe it will exist as a separate network.

©iStockphoto/dstephens
Planning a tropical getaway? Web 3.0 might help simplify your travel plans.

It's easier to get the concept with an example. Let's say that you're thinking about going on a vacation. You want to go someplace warm and tropical. You have set aside a budget of $3,000 for your trip. You want a nice place to stay, but you don't want it to take up too much of your budget. You also want a good deal on a flight.

With the Web technology currently available to you, you'd have to do a lot of research to find the best vacation options. You'd need to research potential destinations and decide which one is right for you. You might visit two or three discount travel sites and compare rates for flights and hotel rooms. You'd spend a lot of your time looking through results on various search engine results pages. The entire process could take several hours.

According to some Internet experts, with Web 3.0 you'll be able to sit back and let the Internet do all the work for you. You could use a search service and narrow the parameters of your search. The browser program then gathers, analyzes and presents the data to you in a way that makes comparison a snap. It can do this because Web 3.0 will be able to understand information on the Web.

Right now, when you use a Web search engine, the engine isn't able to really understand your search. It looks for Web pages that contain the keywords found in your search terms. The search engine can't tell if the Web page is actually relevant for your search. It can only tell that the keyword appears on the Web page. For example, if you searched for the term "Saturn," you'd end up with results for Web pages about the planet and others about the car manufacturer.

A Web 3.0 search engine could find not only the keywords in your search, but also interpret the context of your request. It would return relevant results and suggest other content related to your search terms. In our vacation example, if you typed "tropical vacation destinations under $3,000" as a search request, the Web 3.0 browser might include a list of fun activities or great restaurants related to the search results. It would treat the entire Internet as a massive database of information available for any query.
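The "Saturn" ambiguity is easy to reproduce with a toy keyword index; the point is that plain keyword matching cannot see the user's intent. The pages and the explicit "context" term below are invented for illustration:

```python
# A tiny inverted index: page title -> set of keywords on the page
pages = {
    "Saturn: sixth planet from the Sun":  {"saturn", "planet", "rings"},
    "Saturn dealership: new car models":  {"saturn", "car", "dealer"},
}

def keyword_search(query_terms):
    """Today's model: any page containing a query keyword matches."""
    return [title for title, terms in pages.items() if query_terms & terms]

def contextual_search(query_terms, context):
    """A Web 3.0-style engine would infer the context itself;
    here we have to pass it in by hand."""
    return [title for title, terms in pages.items()
            if query_terms & terms and context in terms]

print(keyword_search({"saturn"}))               # both pages match
print(contextual_search({"saturn"}, "planet"))  # only the astronomy page
```

The keyword engine cannot choose between the planet and the car; the contextual one can, once intent is known.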

Web 3.0 Approaches

You never know how future technology will eventually turn out. In the case of Web 3.0, most Internet experts agree about its general traits. They believe that Web 3.0 will provide users with richer and more relevant experiences. Many also believe that with Web 3.0, every user will have a unique Internet profile based on that user's browsing history. Web 3.0 will use this profile to tailor the browsing experience to each individual. That means that if two different people each performed an Internet search with the same keywords using the same service, they'd receive different results determined by their individual profiles.

©iStockphoto/ktsimage
Web 3.0 will likely plug into your individual tastes and browsing habits.

The technologies and software required for this kind of application aren't yet mature. Services like TiVO and Pandora provide individualized content based on user input, but they both rely on a trial-and-error approach that isn't as efficient as what the experts say Web 3.0 will be. More importantly, both TiVO and Pandora have a limited scope -- television shows and music, respectively -- whereas Web 3.0 will involve all the information on the Internet.

Some experts believe that the foundation for Web 3.0 will be application programming interfaces (APIs). An API is an interface designed to allow developers to create applications that take advantage of a certain set of resources. Many Web 2.0 sites include APIs that give programmers access to the sites' unique data and capabilities. For example, Facebook's API allows developers to create programs that use Facebook as a staging ground for games, quizzes, product reviews and more.

One Web 2.0 trend that could help the development of Web 3.0 is the mashup. A mashup is the combination of two or more applications into a single application. For example, a developer might combine a program that lets users review restaurants with Google Maps. The new mashup application could show not only restaurant reviews, but also map them out so that the user could see the restaurants' locations. Some Internet experts believe that creating mashups will be so easy in Web 3.0 that anyone will be able to do it.
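At its core, a mashup like this is just a join between two data sources. A toy sketch follows; the review and location data are invented, and a real mashup would fetch the coordinates from a mapping API rather than a local dictionary:

```python
# Two independent "applications": a review service and a map service
reviews   = {"Casa Maria": 4.5, "Taco Loco": 3.8}
locations = {"Casa Maria": (40.7128, -74.0060),
             "Taco Loco": (40.7306, -73.9866)}

def mashup(reviews, locations):
    """Join review scores with map coordinates per restaurant."""
    return [
        {"name": name, "rating": rating, "coords": locations[name]}
        for name, rating in reviews.items()
        if name in locations
    ]

for entry in mashup(reviews, locations):
    print(entry["name"], entry["rating"], entry["coords"])
```

The combined application can now show each restaurant's rating on a map, which neither source could do alone.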

Other experts think that Web 3.0 will start fresh. Instead of using HTML as the basic coding language, it will rely on some new -- and unnamed -- language. These experts suggest it might be easier to start from scratch rather than try to change the current Web. However, this version of Web 3.0 is so theoretical that it's practically impossible to say how it will work.

The man responsible for the World Wide Web has his own theory of what the future of the Web will be. He calls it the Semantic Web, and many Internet experts borrow heavily from his work when talking about Web 3.0.

Making a Semantic Web

Tim Berners-Lee invented the World Wide Web in 1989. He created it as an interface for the Internet and a way for people to share information with one another. Berners-Lee disputes the existence of Web 2.0, calling it nothing more than meaningless jargon [source: Register]. Berners-Lee maintains that he intended the World Wide Web to do all the things that Web 2.0 is supposed to do.

Catrina Genovese/Getty Images

Tim Berners-Lee, the inventor of the World Wide Web

Berners-Lee's vision of the future Web is similar to the concept of Web 3.0. It's called the Semantic Web. Right now, the Web's structure is geared for humans. It's easy for us to visit a Web page and understand what it's all about. Computers can't do that. A search engine might be able to scan for keywords, but it can't understand how those keywords are used in the context of the page.

With the Semantic Web, computers will scan and interpret information on Web pages using software agents. These software agents will be programs that crawl through the Web, searching for relevant information. They'll be able to do that because the Semantic Web will have collections of information called ontologies. In terms of the Internet, an ontology is a file that defines the relationships among a group of terms. For example, the term "cousin" refers to the familial relationship between two people who share one set of grandparents. A Semantic Web ontology might define each familial role like this:

  • Grandparent: A direct ancestor two generations removed from the subject
  • Parent: A direct ancestor one generation removed from the subject
  • Brother or sister: Someone who shares the same parent as the subject
  • Nephew or niece: Child of the brother or sister of the subject
  • Aunt or uncle: Sister or brother to a parent of the subject
  • Cousin: Child of an aunt or uncle of the subject

For the Semantic Web to be effective, ontologies have to be detailed and comprehensive. In Berners-Lee's concept, they would exist in the form of metadata. Metadata is information included in the code for Web pages that is invisible to humans, but readable by computers.
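The family ontology above can be encoded directly, which is what lets a software agent derive a relationship such as "cousin" instead of merely matching the word. This minimal sketch uses a plain Python data model of our own devising, not an actual Semantic Web format such as RDF or OWL:

```python
# parent_of maps each person to their parents (names are invented)
parent_of = {
    "alice": ["carol", "dave"],
    "bob":   ["erin", "frank"],
    "carol": ["gina", "hank"],
    "erin":  ["gina", "hank"],
}

def grandparents(person):
    """Direct ancestors two generations removed from the subject."""
    return {gp for p in parent_of.get(person, [])
               for gp in parent_of.get(p, [])}

def are_cousins(a, b):
    """Cousins share a grandparent but do not share a parent."""
    share_gp = bool(grandparents(a) & grandparents(b))
    share_parent = bool(set(parent_of.get(a, [])) &
                        set(parent_of.get(b, [])))
    return share_gp and not share_parent

print(are_cousins("alice", "bob"))  # True: carol and erin are siblings
```

A semantic agent would apply the same kind of rule, but driven by ontologies published as metadata rather than hard-coded dictionaries.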

Constructing ontologies takes a lot of work. In fact, that's one of the big obstacles the Semantic Web faces. Will people be willing to put in the effort required to make comprehensive ontologies for their Web sites? Will they maintain them as the Web sites change? Critics suggest that the task of creating and maintaining such complex files is too much work for most people.

On the other hand, some people really enjoy labeling or tagging Web objects and information. Web tags categorize the tagged object or information. Several blogs include a tag option, making it easy to classify journal entries under specific topics. Photo sharing sites like Flickr allow users to tag pictures. Google even has turned it into a game: Google Image Labeler pits two people against each other in a labeling contest. Each player tries to create the largest number of relevant tags for a series of images. According to some experts, Web 3.0 will be able to search tags and labels and return the most relevant results back to the user. Perhaps Web 3.0 will combine Berners-Lee's concept of the Semantic Web with Web 2.0's tagging culture.

Even though Web 3.0 is more theory than reality, that hasn't stopped people from guessing what will come next.

Beyond Web 3.0

Whatever we call the next generation of the Web, what will come after it? Theories range from conservative predictions to guesses that sound more like science fiction films.

David Paul Morris/Getty Images

Paul Otellini, CEO and President of Intel, discusses the increasing importance of mobile devices on the Web at the 2008 International Consumer Electronics Show.

Here are just a few:

  • According to technology expert and entrepreneur Nova Spivack, the development of the Web moves in 10-year cycles. In the Web's first decade, most of the development focused on the back end, or infrastructure, of the Web. Programmers created the protocols and code languages we use to make Web pages. In the second decade, focus shifted to the front end and the era of Web 2.0 began. Now people use Web pages as platforms for other applications. They also create mashups and experiment with ways to make Web experiences more interactive. We're at the end of the Web 2.0 cycle now. The next cycle will be Web 3.0, and the focus will shift back to the back end. Programmers will refine the Internet's infrastructure to support the advanced capabilities of Web 3.0 browsers. Once that phase ends, we'll enter the era of Web 4.0. Focus will return to the front end, and we'll see thousands of new programs that use Web 3.0 as a foundation [source: Nova Spivack].
  • The Web will evolve into a three-dimensional environment. Rather than a Web 3.0, we'll see a Web 3D. Combining virtual reality elements with the persistent online worlds of massively multiplayer online roleplaying games (MMORPGs), the Web could become a digital landscape that incorporates the illusion of depth. You'd navigate the Web either from a first-person perspective or through a digital representation of yourself called an avatar (to learn more about an avatar's perspective, read How the Avatar Machine Works).
  • The Web will build on developments in distributed computing and lead to true artificial intelligence. In distributed computing, several computers tackle a large processing job. Each computer handles a small part of the overall task. Some people believe the Web will be able to think by distributing the workload across thousands of computers and referencing deep ontologies. The Web will become a giant brain capable of analyzing data and extrapolating new ideas based on that information.
  • The Web will extend far beyond computers and cell phones. Everything from watches to television sets to clothing will connect to the Internet. Users will have a constant connection to the Web, and vice versa. Each user's software agent will learn more about its respective user by electronically observing his or her activities. This might lead to debates about the balance between individual privacy and the benefit of having a personalized Web browsing experience.
  • The Web will merge with other forms of entertainment until all distinctions between the forms of media are lost. Radio programs, television shows and feature films will rely on the Web as a delivery system.

It's too early to tell which (if any) of these future versions of the Web will come true. It may be that the real future of the Web is even more extravagant than the most extreme predictions. We can only hope that by the time the future of the Web gets here, we can all agree on what to call it.

Source : here

Jumat, 05 Maret 2010

How Web Operating Systems Work

As the Web evolves, people invent new words to describe its features and applications. Sometimes, a term gains widespread acceptance even if some people believe it's misleading or inaccurate. Such is the case with Web operating systems.

AstraNOS
2008 ©HowStuffWorks
The AstraNOS operating system login screen.

An operating system (OS) is a special kind of program that organizes and controls computer hardware and software. Operating systems interact directly with computer hardware and serve as a platform for other applications. Whether it's Windows, Linux, Unix or Mac OS X, your computer depends on its OS to function.

That's why some people object to the term Web OS. A Web OS is a user interface (UI) that allows people to access applications stored completely or in part on the Web. It might mimic the user interface of traditional computer operating systems like Windows, but it doesn't interact directly with the computer's hardware. The user must still have a traditional OS on his or her computer.

While there aren't many computer operating systems to choose from, the same can't be said of Web operating systems. There are dozens of Web operating systems available. Some of them offer a wide range of services, while others are still in development and only provide limited functionality. In some cases, there may be a single ambitious programmer behind the project. Other Web operating systems are the product of a large team effort. Some are free to download, and others charge a fee. Web operating systems can come in all shapes and sizes.

What do Web operating systems do?

Web operating systems are interfaces to distributed computing systems, particularly cloud or utility computing systems. In these systems, a company provides computer services to users through an Internet connection. The provider runs a system of computers that include application servers and databases.

With some systems, people access the applications using Web browsers like Firefox or Internet Explorer. With other systems, users must download a program that creates a system-specific client. A client is software that accesses information or services from other software. In either case, users access programs that are stored not on their own computers, but on the Web.

What sort of services do they provide? Web operating systems can give users access to practically any program they could run on a computer's desktop. Common applications include:

  • Calendars
  • E-mail
  • File management
  • Games
  • Instant messaging programs
  • Photo, video and audio editing programs
  • RSS readers
  • Spreadsheet programs
  • Word processing programs

With traditional computer operating systems, you'd have to install applications to your own computer. The applications would exist on your computer's hard disk drive. They would run by accessing the processing power of your computer's central processing unit (CPU) by sending electronic requests to your computer's OS.

Web operating systems can't replace your computer's native OS -- in fact, they depend on traditional computer operating systems to work. The user side of Web OS software, whether it's a Web browser or a system-specific client, runs on top of your computer's OS. But programmers design Web operating systems to look and act like a desktop OS. A Web OS might look a lot like a traditional OS, but it doesn't manage your computer's hardware or software.

iGoogle
©2008 HowStuffWorks
Portals like iGoogle aren't true operating systems, but they do pull information from other Web pages into a centralized site.

A Web OS allows you to access applications stored not on your computer, but on the Web. The applications exist wholly or in part on Web servers within a particular provider network. When you save information in an application, you might not store it on your computer. Instead, you save the information to databases connected to the Internet. Some Web operating systems also give you the option to save information to your local hard disk drive.

Because Web operating systems aren't tied to a specific computer or device, you can access Web applications and data from any device connected to the Internet. That is, you can do it as long as the device can run the Web operating software (whether that's a particular Web browser or client). This means that you can access the Web OS on one computer, create a document, save the work and then access it again later using a completely different machine. Web operating systems offer users the benefit of accessibility -- data isn't tied down to your computer.

The Technology of Web Operating Systems


With so many different Web operating systems either currently available or in development, it should come as no surprise that programmers use different approaches to achieve the same effect. While the goal of a Web OS is to provide an experience similar to using a desktop OS, there are no hard and fast rules for how to make that happen. The two most popular approaches rely on Flash technologies or Asynchronous JavaScript and XML (AJAX) technologies.

Flash is a set of technologies that enable programmers to create interactive Web pages. It's a technology that uses vector graphics. Vector graphics record image data as a collection of shapes and lines rather than individual pixels, which allows computers to load Flash images and animation faster than pixel-based graphics.

Flash files stream over the Internet, which means the end user accessing the file doesn't have to wait for the entire file to download to his or her computer before accessing parts of it. With Flash-based programs like YouTube's video player, this means you can start watching a film clip without having to download it first.

More than 98 percent of all computers connected to the Internet have a Flash player installed [source: Adobe]. That makes Flash an attractive approach for many programmers. They can create a Web OS knowing that the vast majority of computer users will be able to access it without having to download additional software.

AJAX technologies rely on Hypertext Markup Language (HTML), the JavaScript programming language, Cascading Style Sheets (CSS) and eXtensible Markup Language (XML). It's a browser technology. The HTML language is a collection of markup tags programmers use on text files that tell Web browsers how to display the text file as a Web page. CSS is a tool that gives programmers more options when tweaking a Web site's appearance. Programmers can create a style sheet with certain attributes such as font style and color, and then apply those styles across several Web pages at once. JavaScript is a programming language that allows applications to send information back and forth between servers and browsers. XML is a markup language, which means programmers use it to describe the structure of information within a file and how it relates to other information.

The "asynchronous" aspect of AJAX means that AJAX applications transfer data between servers and browsers in small bits of information as needed. The alternative is to send an entire Web page to the browser every time something changes, which would significantly slow down the user's experience. With sufficient skill and knowledge, a programmer can create an AJAX application with the same functions as a desktop application.

Like Flash, most computers can run AJAX applications. That's because AJAX isn't a new programming language but rather a way to use established Web standards to create new applications. As long as an application programmer includes the right information in an application's code, it should run fine on any major Web browser. Some well known Web applications based on AJAX include Google Calendar and Gmail.

Why Use a Web OS?


Web operating systems simplify a user's experience when accessing applications hosted on remote servers. Ideally, a Web OS behaves like a desktop OS. The more familiar and intuitive the system, the faster people will learn how to use it. When a person chooses to run a certain application, his or her computer sends a request to the system's control node -- a special server that acts as a system administrator. The control node interprets the request and connects the user's client to the appropriate application server or database. By offloading applications, storage and processing power to a remote network, users don't have to worry about upgrading computer systems every few years.
YouOS
©2008 HowStuffWorks
YouOS is one of the more popular Web operating systems on the Internet.

For many people, that's the most attractive feature of Web operating systems. As long as their computers can run the browser or client software necessary to access the system, there's no need to upgrade. Some people become frustrated when they have to purchase new computers in order to run current software. With distributed computing, it's the provider's responsibility to provide application functionality. If the provider isn't able to meet user demands, users might look elsewhere for services.

Web operating systems can also make it easier to share data between computers. Perhaps you own both a Mac computer and a PC. It can be challenging to share data between the two different computers. Even if you use file formats that are compatible with both Mac computers and PCs, you could end up with a copy of the same file on each machine. Changes to one copy aren't reflected in the other computer's copy. Web operating systems provide an interface where you can use any computer to create, modify and access a single copy of a file saved on a remote database. As long as the Web OS you're using can cross platforms, meaning it works on both Macs and PCs, you'll be able to work on the file at any time using either of your computers.

Likewise, Web operating systems can simplify collaborative projects. Many Web operating systems allow users to share files. Each user can work from the file saved to the system's native network. For many users, this is an attractive alternative to organizing multiple copies of the same file and then incorporating everyone's changes into a new version.

Source : here

Byte Prefixes and Binary Math

When you start talking about lots of bytes, you get into prefixes like kilo, mega and giga, as in kilobyte, megabyte and gigabyte (also shortened to K, M and G, as in Kbytes, Mbytes and Gbytes or KB, MB and GB). The following table shows the binary multipliers:

Name
Abbr.
Size
Kilo
K
2^10 = 1,024
Mega
M
2^20 = 1,048,576
Giga
G
2^30 = 1,073,741,824
Tera
T
2^40 = 1,099,511,627,776
Peta
P
2^50 = 1,125,899,906,842,624
Exa
E
2^60 = 1,152,921,504,606,846,976
Zetta
Z
2^70 = 1,180,591,620,717,411,303,424
Yotta
Y
2^80 = 1,208,925,819,614,629,174,706,176


You can see in this chart that kilo is about a thousand, mega is about a million, giga is about a billion, and so on. So when someone says, "This computer has a 2 gig hard drive," what he or she means is that the hard drive stores 2 gigabytes, or approximately 2 billion bytes, or exactly 2,147,483,648 bytes. How could you possibly need 2 gigabytes of space? When you consider that one CD holds 650 megabytes, you can see that just three CDs' worth of data will fill the whole thing! Terabyte databases are fairly common these days, and there are probably a few petabyte databases floating around the Pentagon by now.

Binary math works just like decimal math, except that the value of each bit can be only 0 or 1. To get a feel for binary math, let's start with decimal addition and see how it works. Assume that we want to add 452 and 751:

  452
+ 751
-----
 1203


To add these two numbers together, you start at the right: 2 + 1 = 3. No problem. Next, 5 + 5 = 10, so you save the zero and carry the 1 over to the next place. Next, 4 + 7 + 1 (because of the carry) = 12, so you save the 2 and carry the 1. Finally, 0 + 0 + 1 = 1. So the answer is 1203.

Binary addition works exactly the same way:

  010
+ 111
-----
 1001

Starting at the right, 0 + 1 = 1 for the first digit. No carrying there. You've got 1 + 1 = 10 for the second digit, so save the 0 and carry the 1. For the third digit, 0 + 1 + 1 = 10, so save the zero and carry the 1. For the last digit, 0 + 0 + 1 = 1. So the answer is 1001. If you translate everything over to decimal you can see it is correct: 2 + 7 = 9.

To sum up, here's what we've learned about bits and bytes:

  • Bits are binary digits. A bit can hold the value 0 or 1.
  • Bytes are made up of 8 bits each.
  • Binary math works just like decimal math, but each bit can have a value of only 0 or 1.
Source : here

The Standard ASCII Character Set

Bytes are frequently used to hold individual characters in a text document. In the ASCII character set, each binary value between 0 and 127 is given a specific character. Most computers extend the ASCII character set to use the full range of 256 characters available in a byte. The upper 128 characters handle special things like accented characters from common foreign languages.

You can see the 128 standard ASCII codes below. Computers store text documents, both on disk and in memory, using these codes. For example, if you use Notepad in Windows 95/98 to create a text file containing the words, "Four score and seven years ago," Notepad would use 1 byte of memory per character (including 1 byte for each space character between the words -- ASCII character 32). When Notepad stores the sentence in a file on disk, the file will also contain 1 byte per character and per space.

Try this experiment: Open up a new file in Notepad and insert the sentence, "Four score and seven years ago" in it. Save the file to disk under the name getty.txt. Then use Windows Explorer and look at the size of the file. You will find that the file has a size of 30 bytes on disk: 1 byte for each character. If you add another word to the end of the sentence and re-save it, the file size will jump to the appropriate number of bytes. Each character consumes a byte.

If you were to look at the file as a computer looks at it, you would find that each byte contains not a letter but a number -- the number is the ASCII code corresponding to the character (see below). For example, part of the file looks like this:

     F   o   u   r     a   n   d      s   e   v   e   n
    70 111 117 114 32 97 110 100 32 115 101 118 101 110

By looking in the ASCII table, you can see a one-to-one correspondence between each character and the ASCII code used. Note the use of 32 for a space -- 32 is the ASCII code for a space. We could expand these decimal numbers out to binary numbers (so 32 = 00100000) if we wanted to be technically correct -- that is how the computer really deals with things.

The first 32 values (0 through 31) are codes for things like carriage return and line feed. The space character is the 33rd value, followed by punctuation, digits, uppercase characters and lowercase characters. To see all 128 values, check out Unicode.org's chart.

We'll learn about byte prefixes and binary math next.

Source : here

The Base-2 System and the 8-bit Byte

The reason computers use the base-2 system is because it makes it a lot easier to implement them with current electronic technology. You could wire up and build computers that operate in base-10, but they would be fiendishly expensive right now. On the other hand, base-2 computers are relatively cheap.

So computers use binary numbers, and therefore use binary digits in place of decimal digits. The word bit is a shortening of the words "Binary digIT." Whereas decimal digits have 10 possible values ranging from 0 to 9, bits have only two possible values: 0 and 1. Therefore, a binary number is composed of only 0s and 1s, like this: 1011. How do you figure out what the value of the binary number 1011 is? You do it in the same way we did it above for 6357, but you use a base of 2 instead of a base of 10. So:

(1 * 2^3) + (0 * 2^2) + (1 * 2^1) + (1 * 2^0) = 8 + 0 + 2 + 1 = 11

You can see that in binary numbers, each bit holds the value of increasing powers of 2. That makes counting in binary pretty easy. Starting at zero and going through 20, counting in decimal and binary looks like this:

0 = 0
1 = 1
2 = 10
3 = 11
4 = 100
5 = 101
6 = 110
7 = 111
8 = 1000
9 = 1001
10 = 1010
11 = 1011
12 = 1100
13 = 1101
14 = 1110
15 = 1111
16 = 10000
17 = 10001
18 = 10010
19 = 10011
20 = 10100

When you look at this sequence, 0 and 1 are the same for decimal and binary number systems. At the number 2, you see carrying first take place in the binary system. If a bit is 1, and you add 1 to it, the bit becomes 0 and the next bit becomes 1. In the transition from 15 to 16 this effect rolls over through 4 bits, turning 1111 into 10000.

Bits are rarely seen alone in computers. They are almost always bundled together into 8-bit collections, and these collections are called bytes. Why are there 8 bits in a byte? A similar question is, "Why are there 12 eggs in a dozen?" The 8-bit byte is something that people settled on through trial and error over the past 50 years.

With 8 bits in a byte, you can represent 256 values ranging from 0 to 255, as shown here:

 0 = 00000000
1 = 00000001
2 = 00000010
...
254 = 11111110
255 = 11111111

In the article How CDs Work, you learn that a CD uses 2 bytes, or 16 bits, per sample. That gives each sample a range from 0 to 65,535, like this:
0 = 0000000000000000
1 = 0000000000000001
2 = 0000000000000010
...
65534 = 1111111111111110
65535 = 1111111111111111

Next, we'll look at one way that bytes are used.

Source : here

Introduction to How Bits and Bytes Work

If you have used a computer for more than five minutes, then you have heard the words bits and bytes. Both RAM and hard disk capacities are measured in bytes, as are file sizes when you examine them in a file viewer.

You might hear an advertisement that says, "This computer has a 32-bit Pentium processor with 64 megabytes of RAM and 2.1 gigabytes of hard disk space." And many HowStuffWorks articles talk about bytes (for example, How CDs Work). In this article, we will discuss bits and bytes so that you have a complete understanding.

Decimal Numbers
The easiest way to understand bits is to compare them to something you know: digits. A digit is a single place that can hold numerical values between 0 and 9. Digits are normally combined together in groups to create larger numbers. For example, 6,357 has four digits. It is understood that in the number 6,357, the 7 is filling the "1s place," while the 5 is filling the 10s place, the 3 is filling the 100s place and the 6 is filling the 1,000s place. So you could express things this way if you wanted to be explicit:

(6 * 1000) + (3 * 100) + (5 * 10) + (7 * 1) = 6000 + 300 + 50 + 7 = 6357

Another way to express it would be to use powers of 10. Assuming that we are going to represent the concept of "raised to the power of" with the "^" symbol (so "10 squared" is written as "10^2"), another way to express it is like this:

(6 * 10^3) + (3 * 10^2) + (5 * 10^1) + (7 * 10^0) = 6000 + 300 + 50 + 7 = 6357

What you can see from this expression is that each digit is a placeholder for the next higher power of 10, starting in the first digit with 10 raised to the power of zero.

That should all feel pretty comfortable -- we work with decimal digits every day. The neat thing about number systems is that there is nothing that forces you to have 10 different values in a digit. Our base-10 number system likely grew up because we have 10 fingers, but if we happened to evolve to have eight fingers instead, we would probably have a base-8 number system. You can have base-anything number systems. In fact, there are lots of good reasons to use different bases in different situations.

Computers happen to operate using the base-2 number system, also known as the binary number system (just like the base-10 number system is known as the decimal number system). Find out why and how that works in the next section.

Source : here

Variables and Printf

Variables

As a programmer, you will frequently want your program to "remember" a value. For example, if your program requests a value from the user, or if it calculates a value, you will want to remember it somewhere so you can use it later. The way your program remembers things is by using variables. For example:

    int b;

This line says, "I want to create a space called b that is able to hold one integer value." A variable has a name (in this case, b) and a type (in this case, int, an integer). You can store a value in b by saying something like:

    b = 5;

You can use the value in b by saying something like:

    printf("%d", b);

In C, there are several standard types for variables:

  • int - integer (whole number) values
  • float - floating point values
  • char - single character values (such as 'm' or 'Z')

Printf

The printf statement allows you to send output to standard out. For us, standard out is generally the screen (although you can redirect standard out into a text file or another command).

Here is another program that will help you learn more about printf:

#include <stdio.h>

int main()
{
    int a, b, c;
    a = 5;
    b = 7;
    c = a + b;
    printf("%d + %d = %d\n", a, b, c);
    return 0;
}

Type this program into a file and save it as add.c. Compile it with the line gcc add.c -o add and then run it by typing add (or ./add). You will see the line "5 + 7 = 12" as output.

Here is an explanation of the different lines in this program:

  • The line int a, b, c; declares three integer variables named a, b and c. Integer variables hold whole numbers.

  • The next line initializes the variable named a to the value 5.

  • The next line sets b to 7.

  • The next line adds a and b and "assigns" the result to c.

    The computer adds the value in a (5) to the value in b (7) to form the result 12, and then places that new value (12) into the variable c. The variable c is assigned the value 12. For this reason, the = in this line is called "the assignment operator."

  • The printf statement then prints the line "5 + 7 = 12." The %d placeholders in the printf statement act as placeholders for values. There are three %d placeholders, and at the end of the printf line there are the three variable names: a, b and c. C matches up the first %d with a and substitutes 5 there. It matches the second %d with b and substitutes 7. It matches the third %d with c and substitutes 12. Then it prints the completed line to the screen: 5 + 7 = 12. The +, the = and the spacing are a part of the format line and get embedded automatically between the %d operators as specified by the programmer.
Source : here

The Simplest C Program

Let's start with the simplest possible C program and use it both to understand the basics of C and the C compilation process. Type the following program into a standard text editor (vi or emacs on UNIX, Notepad on Windows or TeachText on a Macintosh). Then save the program to a file named samp.c. If you leave off .c, you will probably get some sort of error when you compile it, so make sure you remember the .c. Also, make sure that your editor does not automatically append some extra characters (such as .txt) to the name of the file. Here's the first program:

#include <stdio.h>

int main()
{
    printf("This is output from my first program!\n");
    return 0;
}

When executed, this program instructs the computer to print out the line "This is output from my first program!" -- then the program quits. You can't get much simpler than that!

To compile this code, take the following steps:

  • On a UNIX machine, type gcc samp.c -o samp (if gcc does not work, try cc). This line invokes the C compiler called gcc, asks it to compile samp.c and asks it to place the executable file it creates under the name samp. To run the program, type samp (or, on some UNIX machines, ./samp).
  • On a DOS or Windows machine using DJGPP, at an MS-DOS prompt type gcc samp.c -o samp.exe. This line invokes the C compiler called gcc, asks it to compile samp.c and asks it to place the executable file it creates under the name samp.exe. To run the program, type samp.
  • If you are working with some other compiler or development system, read and follow the directions for the compiler you are using to compile and execute the program.

You should see the output "This is output from my first program!" when you run the program. During compilation, the compiler translated your human-readable source file into a machine-readable executable.

If you mistype the program, it either will not compile or it will not run. If the program does not compile or does not run correctly, edit it again and see where you went wrong in your typing. Fix the error and try again.

Note: Position

When you enter this program, position #include <stdio.h> so that the pound sign is in column 1 (the far left side). Otherwise, the spacing and indentation can be any way you like it. On some UNIX systems, you will find a program called cb, the C Beautifier, which will format code for you. The spacing and indentation shown above is a good example to follow.

Let's walk through this program and start to see what the different lines are doing :

  • This C program starts with #include <stdio.h>. This line includes the "standard I/O library" into your program. The standard I/O library lets you read input from the keyboard (called "standard in"), write output to the screen (called "standard out"), process text files stored on the disk, and so on. It is an extremely useful library. C has a large number of standard libraries like stdio, including string, time and math libraries. A library is simply a package of code that someone else has written to make your life easier (we'll discuss libraries a bit later).
  • The line int main() declares the main function. Every C program must have a function named main somewhere in the code. We will learn more about functions shortly. At run time, program execution starts at the first line of the main function.
  • In C, the { and } symbols mark the beginning and end of a block of code. In this case, the block of code making up the main function contains two lines.
  • The printf statement in C allows you to send output to standard out (for us, the screen). The portion in quotes is called the format string and describes how the data is to be formatted when printed. The format string can contain string literals such as "This is output from my first program!," symbols for carriage returns (\n), and operators as placeholders for variables (see below). If you are using UNIX, you can type man 3 printf to get complete documentation for the printf function. If not, see the documentation included with your compiler for details about the printf function.
  • The return 0; line causes the function to return an error code of 0 (no error) to the shell that started execution. More on this capability a bit later.
Source : here

Kamis, 04 Maret 2010

What is C?

C is a computer programming language. That means that you can use C to create lists of instructions for a computer to follow. C is one of thousands of programming languages currently in use. C has been around for several decades and has won widespread acceptance because it gives programmers maximum control and efficiency. C is an easy language to learn. It is a bit more cryptic in its style than some other languages, but you get beyond that fairly quickly.

C is what is called a compiled language. This means that once you write your C program, you must run it through a C compiler to turn your program into an executable that the computer can run (execute). The C program is the human-readable form, while the executable that comes out of the compiler is the machine-readable and executable form. What this means is that to write and run a C program, you must have access to a C compiler. If you are using a UNIX machine (for example, if you are writing CGI scripts in C on your host's UNIX computer, or if you are a student working on a lab's UNIX machine), the C compiler is available for free. It is called either "cc" or "gcc" and is available on the command line. If you are a student, then the school will likely provide you with a compiler -- find out what the school is using and learn about it. If you are working at home on a Windows machine, you are going to need to download a free C compiler or purchase a commercial compiler. A widely used commercial compiler is Microsoft's Visual C++ environment (it compiles both C and C++ programs). Unfortunately, this program costs several hundred dollars. If you do not have hundreds of dollars to spend on a commercial compiler, then you can use one of the free compilers available on the Web. See http://delorie.com/djgpp/ as a starting point in your search.

We will start at the beginning with an extremely simple C program and build up from there. I will assume that you are using the UNIX command line and gcc as your environment for these examples; if you are not, all of the code will still work fine -- you will simply need to understand and use whatever compiler you have available.

Source : here

Selasa, 02 Maret 2010

Stream Control Transmission Protocol

SCTP is a reliable, general-purpose transport layer protocol for use on IP networks. Although the protocol was originally designed for telephony signaling (in RFC 2960), SCTP provides an added bonus: it solves some of the limitations of TCP while borrowing beneficial features of UDP. SCTP provides features for high availability, increased reliability, and improved security for socket initiation. (Figure 1 shows the layered architecture of the IP stack.)


Figure 1. Layered architecture of the IP stack
Layered architecture of the IP stack

This article introduces the concept of SCTP in the Linux 2.6 kernel, highlights some of the advanced features (such as multi-homing and -streaming), and provides server and client source code snippets (with a URL to more code) to demonstrate the protocol's ability to deliver multi-streaming.

Let's start with an overview of the IP stack.

The IP stack

The Internet protocol suite is split into several layers; each layer provides specific functionality as shown in Figure 1.

Starting from the bottom:

  • The link layer provides the physical interface to the communication medium (such as an Ethernet device).
  • The network layer manages the movement of packets in a network, specifically making sure packets get to their destination (also called routing).
  • The transport layer regulates the flow of packets between two hosts for the application layer. It also presents the application endpoint for communication, known as a port.
  • Finally, the application layer provides meaning to the data transported through the socket. This data could consist of e-mail messages using the Simple Mail Transport Protocol (SMTP) or Web pages rendered through the Hypertext Transport Protocol (HTTP).

All application layer protocols use the sockets layer as their interface to the transport layer protocol. The Sockets API was developed at UC Berkeley within the BSD UNIX® operating system.

Now for a quick refresher on traditional transport layer protocols before we dive into the workings of SCTP.

The transport layer protocols

The two most popular transport layer protocols are the transmission control protocol (TCP) and the user datagram protocol (UDP):

  • TCP is a reliable protocol that guarantees sequenced, ordered delivery of data and manages congestion within a network.
  • UDP is a message-oriented protocol that neither guarantees ordering of delivery nor manages congestion.

However, UDP is a fast protocol that preserves the boundaries of the messages it transports.

This article presents another option: SCTP. It provides the reliable, ordered delivery of data like TCP but operates in the message-oriented fashion like UDP, preserving message boundaries. SCTP also provides several advanced features:

  • Multi-homing
  • Multi-streaming
  • Initiation protection
  • Message framing
  • Configurable unordered delivery
  • Graceful shutdown

Key features of SCTP

The two most important enhancements in SCTP over traditional transport layer protocols are the end-host multi-homing and multi-streaming capabilities.

Multi-homing

Multi-homing provides applications with higher availability than those that use TCP. A multi-homed host is one that has more than one network interface and therefore more than one IP address at which it can be reached. In TCP, a connection refers to a channel between two endpoints (in this case, a socket between the interfaces of two hosts). SCTP introduces the concept of an association that exists between two hosts but can potentially involve multiple interfaces at each host.

Figure 2 illustrates the difference between a TCP connection and an SCTP association.


Figure 2. TCP connection vs. an SCTP association

At the top is a TCP connection. Each host includes a single network interface; a connection is created between a single interface on each of the client and server. Upon establishment, the connection is bound to each interface.

At the bottom of the figure, you can see an architecture that includes two network interfaces per host. Two paths are provided through the independent networks, one from interface C0 to S0 and another from C1 to S1. In SCTP, these two paths would be collected into an association.

SCTP monitors the paths of the association using a built-in heartbeat; upon detecting a path failure, the protocol sends traffic over the alternate path. It's not even necessary for the applications to know that a failover recovery occurred.

Failover can also be used to maintain network application connectivity. For example, consider a laptop that includes a wireless 802.11 interface and an Ethernet interface. When the laptop is in its docking station, the higher-speed Ethernet interface would be preferred (in SCTP, called the primary address); but upon loss of this connection (removal from the docking station), connections would be failed over to the wireless interface. Upon return to the docking station, the Ethernet connection would be detected and communication resumed over this interface. This is a powerful mechanism for providing high availability and increased reliability.

Multi-streaming

In some ways, an SCTP association is like a TCP connection except that SCTP supports multiple streams within an association. All the streams within an association are independent but related to the association (see Figure 3).


Figure 3. Relationship of an SCTP association to streams

Each stream is given a stream number that is encoded inside SCTP packets flowing through the association. Multi-streaming is important because a blocked stream (for example, one awaiting re-transmission resulting from the loss of a packet) does not affect the other streams in an association. This problem is commonly referred to as head-of-line blocking. TCP is prone to such blocking.

How can multiple streams provide better responsiveness in transporting data? For example, the HTTP protocol shares control and data over the same socket. A Web client requests a file from a server, and the server sends the file back over the same connection. A multi-streamed HTTP server would provide better interactivity because multiple requests could be serviced on independent streams within the association. This functionality would parallelize the responses and, while not necessarily faster overall, would load the HTML and graphics images simultaneously, providing the perception of better responsiveness.

Multi-streaming is an important feature of SCTP, especially when you consider some of the control and data issues in protocol design. In TCP, control and data typically share the same connection, which can be problematic because control packets can be delayed behind data packets. If control and data were split into independent streams, control data could be dealt with in a more timely manner, resulting in better utilization of available resources.

Initiation protection

Initiating a new connection in TCP and SCTP occurs with a packet handshake. In TCP, it's called a three-way handshake. The client sends a SYN packet (short for Synchronize) for which the server responds with a SYN-ACK packet (Synchronize-Acknowledge). Finally, the client confirms receipt with an ACK packet (see Figure 4).


Figure 4. The packet exchanges for the TCP and SCTP handshake

The problem that can occur with TCP is when a rogue client forges an IP packet with a bogus source address, then floods a server with TCP SYN packets. The server allocates resources for the connections upon receipt of the SYN, then under a flood of SYN packets, eventually runs out and is unable to service new requests. This is called a Denial of Service (DoS) attack.

SCTP protects against this type of attack through a four-way handshake and the introduction of a cookie. In SCTP, a client initiates a connection with an INIT packet. The server responds with an INIT-ACK, which includes the cookie (a unique context identifying this proposed connection). The client then responds with a COOKIE-ECHO, which contains the cookie sent by the server. At this point, the server allocates the resource for the connection and acknowledges this by sending a COOKIE-ACK to the client.

To solve the problem of delayed data movement with the four-way handshake, SCTP permits data to be included in the COOKIE-ECHO and COOKIE-ACK packets.

Message framing

With message framing, the boundaries in which messages are communicated through a socket are preserved; this means that if a client sends 100 bytes to a server followed by 50 bytes, the server will read 100 bytes and 50 bytes, respectively, for two reads. UDP also operates in this way, which makes it advantageous for message-oriented protocols.

In contrast, TCP operates in a byte-stream fashion. Without framing, a peer may receive more or less than was sent (splitting up a write or aggregating multiple writes into a single read). This behavior requires that message-oriented protocols operating over TCP provide data-buffer and message framing within their application layer (a potentially complex task).

SCTP provides for message framing in data transfer. When a peer performs a write on a socket, it is guaranteed that this same-sized chunk of data will be read at the peer endpoint (see Figure 5).


Figure 5. Message framing in UDP/SCTP vs. a byte-stream-oriented protocol

For stream-oriented data, such as audio or video data, lack of framing is acceptable.

Configurable unordered delivery

Messages in SCTP are transferred reliably and, by default, delivered in order within a stream. TCP guarantees that data is delivered in order (which is a good thing, considering TCP is a stream protocol), while UDP guarantees no ordering at all. However, you can also configure streams within SCTP to accept unordered messages if desired.

This feature can be useful in message-oriented protocols in which requests are independent and ordering is not important. Further, you can configure unordered delivery on a stream-by-stream basis within an association.

Graceful shutdown

TCP and SCTP are connection-based protocols, while UDP is a connection-less protocol. Both TCP and SCTP require connection setup and teardown between peers. What's different about socket shutdown in SCTP is the removal of TCP's half-close.

Figure 6 shows the shutdown sequences for TCP and SCTP.


Figure 6. TCP and SCTP connection termination sequences

In TCP, it's possible for a peer to close its end of a socket (resulting in a FIN packet being sent) but then to continue to receive data. The FIN indicates that no more data is to be sent by this endpoint, but until the peer closes its end of the socket, it may continue to transmit data. Applications rarely use this half-closed state, and therefore the SCTP designers opted to remove it and replace it with a cleaner termination sequence. When a peer closes its socket (resulting in the issuance of a SHUTDOWN primitive), both endpoints are required to close, and no further data movement is permitted in either direction.

Source : here