Tuesday, March 23, 2010

Looking into the future

During this age of rapid technological and social change, it is important for a profession to gaze into the future. Alvin Toffler has said that “as we advance into the terra incognita of tomorrow, it is better to have a general and incomplete map, subject to revision and correction, than to have no map at all.” To put it another way, we cannot know the future, but we can see trends happening now and project them forward. We only have to gaze five to ten years into the future to know that the profession of orientation and mobility as it is now understood will either be gone or will fundamentally change. These may seem like outrageous comments to those not following the communications and biotechnology revolutions, but the changes are here and now, and they will have a profound impact on our professional lives.
In 1965, Gordon Moore, co-founder of chip maker Intel, put forth an axiom that became known as Moore's Law. Moore observed that every year since 1959 the number of components on a microchip had doubled. The law is now treated as a summary statement that every 18 months a new chip comes to market that is twice as fast as its predecessor, has twice the memory, and is cheaper and more compact. The complexity of the software that takes advantage of this periodic doubling of computer chips also follows Moore's Law. Microsoft's word processing program had 27,000 lines of code when it was first released; by 1995 Microsoft Word had two million lines of code, and each successive version is expected to double that again, to four million, then eight, then sixteen, every 12 to 18 months. The Internet is doubling in size every year, and World Wide Web pages are doubling every 50 days. The power of a computer network has been defined as the square of the number of users on the network, so each time the user base doubles, the network's power quadruples; by that reasoning, the power of the World Wide Web is quadrupling every 50 days.
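To make that squaring argument concrete, here is a tiny Python sketch (the user count is an arbitrary, made-up figure): when the number of users doubles, a power measure defined as the square of the user count grows by a factor of four.

    # Network "power" modeled as the square of the number of users.
    users = 1_000_000                  # arbitrary illustrative figure
    power_before = users ** 2
    power_after = (2 * users) ** 2     # the user base doubles

    print(power_after / power_before)  # 4.0 -- doubling the users quadruples the power
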
Raymond Kurzweil, inventor of the Kurzweil Reader (and of numerous other inventions for disabled people), tells his audiences that the doubling power of computers has reached a critical threshold. Dr. Kurzweil tells the story of the Chinese Emperor who was so pleased with the game of chess that he granted its inventor any wish. The creator of chess asked the Emperor for one humble grain of rice to be placed on the first square of a chess board, two meagre grains of rice on the second square, four tiny grains on the third square, eight grains lined up neatly on the next square, and so forth until all 64 squares of the board were accounted for. The Emperor thought the inventor a humble man for asking so little, yet by the 32nd square the Emperor owed the inventor roughly four billion grains of rice, enough to cover a large field. Dramatic things happen on the second half of the chess board, from square 33 to square 64 (at which point the inventor controls all the rice on the planet). According to Dr. Kurzweil, in 1995 we reached the 32nd doubling of computer power, so Moore's Law has now passed the 32nd square of the chess board, and dramatic changes are afoot. Computers are getting ready to listen, understand, translate languages in real time, and respond instantly with voice, video, animation, graphics, and text. Soon computers will immerse us in virtual worlds so strange and unusual we cannot yet imagine their composition. The reason these changes are no longer science fiction is that we now have (or will soon have) the computing power to make them happen. We have crossed the threshold into the future; we are standing on the second half of the chess board. Machine vision is not science fiction, it is about to happen: there are now a billion transistors on a chip, and this magnitude of power is what makes practical machine vision possible.
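The chess-board arithmetic is easy to check: square n holds 2^(n-1) grains, so the first n squares hold 2^n - 1 grains in total. A short Python sketch:

    # Running total of rice grains after n squares: 1 + 2 + 4 + ... = 2**n - 1.
    def grains_through(square):
        return 2 ** square - 1

    print(f"{grains_through(32):,}")  # 4,294,967,295 -- roughly four billion by square 32
    print(f"{grains_through(64):,}")  # 18,446,744,073,709,551,615 on the full board
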
Even with all this development, society's needs are not yet fully met, and necessity is the mother of invention. Much research is now under way to move from silicon fabrication to nanotechnology, which promises to take us into a new world.
According to “10 Ideas for the Next 10 Years,” an article published recently in Time magazine, bandwidth will be the new black gold; “black gold” here signals that bandwidth is set to become a scarce, oil-like commodity. Early signs are already visible in America. The article points to unexpected, budget-breaking mobile-phone bills. Most aren't as bad as the $22,000 bill a California man received from Verizon Wireless for his teenager's Internet usage, or the New York family whose iPhones racked up nearly $4,800 by automatically checking for e-mails on a Mediterranean cruise, but they show America's growing Internet appetite. In the U.S. in 2010, a family can easily spend hundreds of dollars a month on cable, mobile phones, and Internet and telephone services; some families already spend at least as much on bandwidth as they do on energy. This demand is straining the available bandwidth in America, and prices are rising to keep it in check, yet there is no sign that usage is decreasing. It is unlikely that the American appetite for bandwidth will diminish anytime soon, nor is it even clear that we want it to. But if we want the pleasure and convenience of a high-bandwidth society, someone will need to figure out a solution to the bandwidth dilemma soon. As already mentioned, necessity is the mother of invention, and given the present pace of development in the IT sector we can expect new inventions to address this need.

Sunday, March 21, 2010

USB 2.0

Universal Serial Bus (USB)

Universal Serial Bus (USB) 2.0 is an external serial interface used on computers and other digital devices to transfer data over a USB cable. The designation “2.0” refers to the version of the USB standard. As of this writing, USB 2.0 remains the most widely used version.

USB is a plug-and-play interface. This means that the computer does not need to be powered off in order to plug in or unplug a USB 2.0 component. For example, an iPod or other MP3 player can be connected to a computer via a USB cable running to the USB 2.0 port. The computer will register the device as another storage area and show any files it contains.

Using the USB 2.0 interface, one can transfer files to or from the MP3 player. When finished, simply unplug the USB cable from the interface. Because the computer does not need to be shut down to plug in the device, USB components are considered “hot swappable.”

Aside from MP3 players, many other external devices use USB 2.0 data ports, including digital cameras, cell phones, and newer cable boxes. Everyday peripherals also make use of USB, such as mice, keyboards, external hard drive enclosures, printers, scanners, fax machines, and wired and wireless network adapters (including Wi-Fi dongles). One of the most popular and convenient USB gadgets is the memory stick, or USB flash drive.

When USB standards change from an existing version to a newer version, as they did from USB 1.1 to USB 2.0, hardware made for the newer version is in most cases backwards-compatible. For instance, if a computer has a USB 1.1 port, a device made for USB 2.0 that is marked as “backwards compatible to USB 1.1” will work on the older port. However, the device will only transfer data at 1.1 speeds using a USB 1.1 port.

Currently, computers are built with USB 2.0 ports. The USB 2.0 standard encompasses three data transfer rates:

* Low Speed: 1.5 megabits per second, used mostly for keyboards and mice.
* Full Speed: 12 megabits per second, the USB 1.1 standard rate.
* Hi Speed: 480 megabits per second, the USB 2.0 standard rate.

Since USB 2.0 supports all three data rates, a device that is marked as “USB 2.0 compliant” isn't necessarily hi-speed; it may operate through a USB 2.0 port at one of the slower rates, so look for clarification when shopping for hi-speed USB 2.0 devices. The rough calculation below shows how much the rate matters in practice.
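
As an illustration (ignoring protocol overhead, so real-world throughput is lower), the Python sketch below estimates how long a 700 MB file transfer would take at each of the three signaling rates; the file size is just an example.

    # Nominal USB 2.0 signaling rates in megabits per second.
    rates_mbps = {"Low Speed": 1.5, "Full Speed": 12, "Hi-Speed": 480}

    file_megabits = 700 * 8            # a 700 MB file expressed in megabits

    for name, rate in rates_mbps.items():
        seconds = file_megabits / rate
        print(f"{name:>10}: about {seconds:,.0f} seconds")
    # Hi-Speed finishes in roughly 12 seconds; Full Speed needs nearly 8 minutes,
    # and Low Speed would take over an hour.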

Grid and Cloud Computing

Grid computing

The idea of grid computing originated with Ian Foster, Carl Kesselman and Steve Tuecke. They got together to develop a toolkit to handle computation management, data movement, storage management and other infrastructure that could handle large grids without restricting themselves to specific hardware and requirements.

Grid computing is the act of sharing tasks over multiple computers. Tasks can range from data storage to complex calculations and can be spread over large geographical distances. In some cases, computers within a grid are used normally and only act as part of the grid when they are not otherwise in use; such grids scavenge unused cycles on any computer they can access in order to complete large jobs. Together, the participating computers form a virtual supercomputer. Networked computers can work on problems traditionally reserved for supercomputers, and these networks of computers are more powerful than the supercomputers built in the seventies and eighties. Modern supercomputers are themselves built on the principles of grid computing, incorporating many smaller computers into a larger whole. The technique is also exceptionally flexible.
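
To give a flavor of the "split one big job into independent pieces" idea, here is a minimal sketch using Python's standard concurrent.futures module. It runs the pieces on local worker processes rather than on remote machines; a real grid toolkit (such as the one developed by Foster, Kesselman and Tuecke) additionally handles scheduling, data movement and security across many independently administered computers.

    from concurrent.futures import ProcessPoolExecutor

    def crunch(chunk):
        # Toy stand-in for an expensive scientific computation on one piece of the job.
        return sum(x * x for x in chunk)

    # Split one large job into independent chunks that could, in principle,
    # be farmed out to any idle machine on a grid.
    chunks = [range(i, i + 100_000) for i in range(0, 1_000_000, 100_000)]

    if __name__ == "__main__":
        with ProcessPoolExecutor() as pool:   # local workers stand in for grid nodes
            partial_sums = pool.map(crunch, chunks)
        print(sum(partial_sums))              # same answer as doing all the work in one place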

Grid computing techniques can be used to create very different types of grids, adding flexibility as well as power by using the resources of multiple machines. An equipment grid uses the grid both to control a piece of equipment, such as a telescope, and to analyze the data that equipment collects. A data grid, by contrast, primarily manages large amounts of information, allowing users to share access.

Grid computing is similar to cluster computing, but there are a number of distinct differences. In a grid, there is no centralized management; computers in the grid are independently controlled, and can perform tasks unrelated to the grid at the operator's discretion. The computers in a grid are not required to have the same operating system or hardware. Grids are also usually loosely connected, often in a decentralized network, rather than contained in a single location, as computers in a cluster often are.

Cloud computing

Cloud computing is, in many ways, a new name for the grid computing technology of the mid-to-late 1990s. Surfacing in late 2007, cloud computing allows services used in everyday practice to be moved onto the Internet rather than stored on a local computer.

Email has been available in both methods for quite some time, and is a very small example of cloud computing technology. With the use of services like Google's Gmail and Yahoo Mail on the rise, people no longer need to use Outlook or other desktop applications for their email. Viewing email in a browser makes it available anywhere there is an internet connection.

In 2007, other services, including word processing, spreadsheets, and presentations, were moved into the cloud computing arena. Google provided word processing, spreadsheet, and presentation applications in its cloud computing environment and integrated them with Gmail and Google Calendar, providing a whole office environment on the web (or in the cloud). Microsoft and other companies are also experimenting with moving programs to the cloud to make them more affordable and more accessible to computer and Internet users. The software-as-a-service initiative (the Microsoft term for cloud computing) is a very hot item for many at Microsoft.
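
As a purely hypothetical sketch of the idea, suppose a document service exposed a simple web API: instead of saving a file to the local disk, an application could post it to the service and read it back later from any machine with an Internet connection. The URL and field names below are invented for illustration, and the example uses the third-party Python requests library.

    import requests  # third-party HTTP client library

    # Hypothetical cloud document service -- the URL and JSON fields are made up.
    API = "https://docs.example.com/api/documents"

    # Store the document "in the cloud" instead of on the local disk.
    resp = requests.post(API, json={"title": "report", "body": "Quarterly numbers..."})
    doc_id = resp.json()["id"]

    # Later, from any other computer, fetch the same document by its id.
    doc = requests.get(f"{API}/{doc_id}").json()
    print(doc["body"])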

Cloud computing at this stage is very popular. Aside from big players like Microsoft and Google, companies have sprung up just to provide cloud-based services as replacements for, or enhancements to, the applications on your PC today. A few of these companies are Zoho.com, an online office suite; Evernote, a site devoted to online note taking; and RememberTheMilk.com, an online task manager.

Computing technologies and programming and development techniques change quite frequently; the goal in cloud computing seems to be to make the technology the user sees very friendly and to keep the experience as simple as possible. Internet-based development has skyrocketed recently with the boom in blogging and other social networking services aimed at finding new ways to help individuals and businesses communicate with customers and with each other in the cloud computing arena.

Cloud computing is here to stay, at least for now. There are some quite valid concerns about storing personal data in the cloud and about the security of that information; the biggest is identity theft. The companies providing cloud-based services are very committed to security; however, it remains the user's prerogative whether or not to put their data in the cloud. Before discounting cloud computing, take a look at the services available and give a few of them a try. Before long, the computing environment as it exists today might just be completely cloud based.

Monday, March 15, 2010

IPv6 Description

IP Address:

An Internet Protocol (IP) address is a numerical label that is assigned to devices participating in a computer network that uses the Internet Protocol for communication between its nodes.

An IP address serves two principal functions: host or network interface identification and location addressing. Its role has been characterized as follows: "A name indicates what we seek. An address indicates where it is. A route indicates how to get there."

The designers of TCP/IP defined an IP address as a 32-bit number [1], and this system, known as Internet Protocol Version 4 or IPv4, is still in use today. However, due to the enormous growth of the Internet and the resulting depletion of available addresses, a new addressing system (IPv6), using 128 bits for the address, was developed in 1995.

It was last standardized by RFC 2460 in 1998. Although IP addresses are stored as binary numbers, they are usually displayed in human-readable notations, such as 208.77.188.166 (for IPv4), and 2001:db8:0:1234:0:567:1:1 (for IPv6).

The Internet Protocol also routes data packets between networks; IP addresses specify the locations of the source and destination nodes in the topology of the routing system. For this purpose, some of the bits in an IP address are used to designate a subnetwork. The number of these bits is indicated in CIDR notation, appended to the IP address; e.g., 208.77.188.166/24.
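
Python's standard ipaddress module (available since Python 3.3) makes these notations easy to explore; a small sketch:

    import ipaddress

    # Parse the example addresses from the text.
    v4 = ipaddress.ip_address("208.77.188.166")
    v6 = ipaddress.ip_address("2001:db8:0:1234:0:567:1:1")
    print(v4.version, v6.version)   # 4 6

    # CIDR notation: /24 means the first 24 bits designate the subnetwork.
    net = ipaddress.ip_network("208.77.188.166/24", strict=False)
    print(net)                      # 208.77.188.0/24
    print(net.num_addresses)        # 256 addresses in this subnet

    # IPv6 addresses are usually written compressed; .exploded shows every hex digit.
    print(v6.exploded)              # 2001:0db8:0000:1234:0000:0567:0001:0001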

As the development of private networks raised the threat of IPv4 address exhaustion, RFC 1918 set aside a group of private address spaces that may be used by anyone on private networks. They are often used with network address translators to connect to the global public Internet.

The Internet Assigned Numbers Authority (IANA), which manages the IP address space allocations globally, cooperates with five Regional Internet Registries (RIRs) to allocate IP address blocks to Local Internet Registries (Internet service providers) and other entities.

IPv6:

The rapid exhaustion of IPv4 address space, despite conservation techniques, prompted the Internet Engineering Task Force (IETF) to explore new technologies to expand the Internet's addressing capability. The permanent solution was deemed to be a redesign of the Internet Protocol itself. This next generation of the Internet Protocol, aimed to replace IPv4 on the Internet, was eventually named Internet Protocol Version 6 (IPv6) in 1995. The address size was increased from 32 to 128 bits (16 octets), which, even with a generous assignment of network blocks, is deemed sufficient for the foreseeable future. Mathematically, the new address space provides the potential for a maximum of 2^128, or about 3.403 × 10^38, unique addresses.

The new design is not aimed at providing a sufficient quantity of addresses alone, but rather at allowing efficient aggregation of subnet routing prefixes at routing nodes. As a result, routing table sizes are smaller, and the smallest possible individual allocation is a subnet of 2^64 hosts, which is the square of the size of the entire IPv4 Internet. At these levels, actual address utilization rates will be small on any IPv6 network segment. The new design also provides the opportunity to separate the addressing infrastructure of a network segment (that is, the local administration of the segment's available space) from the addressing prefix used to route external traffic for the network. IPv6 has facilities that automatically change the routing prefix of entire networks, should the global connectivity or the routing policy change, without requiring internal redesign or renumbering.
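
These figures are straightforward to verify with a quick Python sketch:

    # Size of the IPv6 address space and of a single /64 subnet.
    ipv4_space = 2 ** 32
    ipv6_space = 2 ** 128
    one_slash_64 = 2 ** 64

    print(f"{ipv6_space:.3e}")               # about 3.403e+38 unique IPv6 addresses
    print(one_slash_64 == ipv4_space ** 2)   # True: one /64 is the IPv4 Internet squared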

The large number of IPv6 addresses allows large blocks to be assigned for specific purposes and, where appropriate, to be aggregated for efficient routing. With such a large address space, there is no need for the complex address-conservation methods used in classless inter-domain routing (CIDR).

All modern desktop and enterprise server operating systems include native support for the IPv6 protocol, but it is not yet widely deployed in other devices, such as home networking routers, voice over Internet Protocol (VoIP) and multimedia equipment, and network peripherals.

IPv6 features include:

  • Supports source and destination addresses that are 128 bits (16 bytes) long.
  • Requires IPSec support.
  • Uses the Flow Label field to identify packet flows for QoS handling by routers.
  • Allows fragmentation to be performed only by the sending host, not by routers.
  • Doesn't include a checksum in the header.
  • Uses a link-local scope all-nodes multicast address.
  • Does not require manual configuration or DHCP.
  • Uses host address (AAAA) resource records in DNS to map host names to IPv6 addresses (see the sketch after this list).
  • Uses pointer (PTR) resource records in the IP6.ARPA DNS domain to map IPv6 addresses to host names.
  • Requires support for a 1280-byte packet size (without fragmentation).
  • Moves optional data to IPv6 extension headers.
  • Uses Multicast Neighbor Solicitation messages to resolve IP addresses to link-layer addresses.
  • Uses Multicast Listener Discovery (MLD) messages to manage membership in local subnet groups.
  • Uses ICMPv6 Router Solicitation and Router Advertisement messages to determine the IP address of the best default gateway.
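
As a small illustration of the AAAA-record point above, Python's standard socket module can ask the system resolver for a host's IPv6 addresses. The hostname is only an example, and the call raises socket.gaierror if the host has no AAAA record or IPv6 name resolution is unavailable.

    import socket

    host = "www.example.com"   # illustrative hostname only

    # Restricting the query to AF_INET6 returns only IPv6 (AAAA) results.
    for family, _type, _proto, _canon, sockaddr in socket.getaddrinfo(host, None, socket.AF_INET6):
        print(sockaddr[0])     # one IPv6 address per AAAA record found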