Saturday, November 29, 2008

WAN TECHNOLOGIES

Introduction to WAN Technologies

What Is a WAN?

A WAN is a data communications network that covers a relatively broad geographic area and that often uses transmission facilities provided by common carriers, such as telephone companies. WAN technologies generally function at the lower three layers of the OSI reference model: the physical layer, the data link layer, and the network layer. Figure 3-1 illustrates the relationship between the common WAN technologies and the OSI model.

Figure 3-1 WAN Technologies Operate at the Lowest Levels of the OSI Model



Point-to-Point Links

A point-to-point link provides a single, pre-established WAN communications path from the customer premises through a carrier network, such as a telephone company, to a remote network. Point-to-point lines are usually leased from a carrier and thus are often called leased lines. For a point-to-point line, the carrier allocates pairs of wire and facility hardware to your line only. These circuits are generally priced based on bandwidth required and distance between the two connected points. Point-to-point links are generally more expensive than shared services such as Frame Relay. Figure 3-2 illustrates a typical point-to-point link through a WAN.

Figure 3-2 A Typical Point-to-Point Link Operates Through a WAN to a Remote Network



Circuit Switching

Switched circuits allow data connections that can be initiated when needed and terminated when communication is complete. This works much like a normal telephone line does for voice communication. Integrated Services Digital Network (ISDN) is a good example of circuit switching. When a router has data for a remote site, the switched circuit is initiated with the circuit number of the remote network. In the case of ISDN circuits, the device actually places a call to the telephone number of the remote ISDN circuit. When the two networks are connected and authenticated, they can transfer data. When the data transmission is complete, the call can be terminated. Figure 3-3 illustrates an example of this type of circuit.

Figure 3-3 A Circuit-Switched WAN Undergoes a Process Similar to That Used for a Telephone




Packet Switching

Packet switching is a WAN technology in which users share common carrier resources. Because this allows the carrier to make more efficient use of its infrastructure, the cost to the customer is generally much lower than with point-to-point lines. In a packet switching setup, networks have connections into the carrier's network, and many customers share the carrier's network. The carrier can then create virtual circuits between customers' sites by which packets of data are delivered from one to the other through the network. The section of the carrier's network that is shared is often referred to as a cloud.

Some examples of packet-switching networks include Asynchronous Transfer Mode (ATM), Frame Relay, Switched Multimegabit Data Services (SMDS), and X.25. Figure 3-4 shows an example packet-switched circuit.

The virtual connections between customer sites are often referred to as virtual circuits.

Figure 3-4 Packet Switching Transfers Packets Across a Carrier Network



WAN Virtual Circuits

A virtual circuit is a logical circuit created within a shared network between two network devices. Two types of virtual circuits exist: switched virtual circuits (SVCs) and permanent virtual circuits (PVCs).

SVCs are virtual circuits that are dynamically established on demand and terminated when transmission is complete. Communication over an SVC consists of three phases: circuit establishment, data transfer, and circuit termination. The establishment phase involves creating the virtual circuit between the source and destination devices. Data transfer involves transmitting data between the devices over the virtual circuit, and the circuit termination phase involves tearing down the virtual circuit between the source and destination devices. SVCs are used in situations in which data transmission between devices is sporadic, largely because SVCs increase bandwidth used due to the circuit establishment and termination phases, but they decrease the cost associated with constant virtual circuit availability.

A PVC is a permanently established virtual circuit that consists of one mode: data transfer. PVCs are used in situations in which data transfer between devices is constant. PVCs decrease the bandwidth use associated with the establishment and termination of virtual circuits, but they increase costs due to constant virtual circuit availability. PVCs are generally configured by the service provider when an order is placed for service.
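To make the two modes concrete, here is a rough Python sketch (illustrative only, not from the original text; the data bursts are invented) contrasting the three-phase SVC lifecycle with the single data-transfer mode of a PVC:

    # Rough illustrative sketch: an SVC pays establishment/termination per use, a PVC does not.
    def send_over_svc(data_bursts):
        for burst in data_bursts:
            print("SVC: establish circuit")    # phase 1: circuit establishment
            print("SVC: transfer", burst)      # phase 2: data transfer
            print("SVC: terminate circuit")    # phase 3: circuit termination

    def send_over_pvc(data_bursts):
        for burst in data_bursts:
            print("PVC: transfer", burst)      # single mode: data transfer

    send_over_svc(["occasional report"])               # sporadic traffic suits an SVC
    send_over_pvc(["feed 1", "feed 2", "feed 3"])      # constant traffic suits a PVC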

WAN Dialup Services

Dialup services offer cost-effective methods for connectivity across WANs. Two popular dialup implementations are dial-on-demand routing (DDR) and dial backup.

DDR is a technique whereby a router can dynamically initiate a call on a switched circuit when it needs to send data. In a DDR setup, the router is configured to initiate the call when certain criteria are met, such as a particular type of network traffic needing to be transmitted. When the connection is made, traffic passes over the line. The router configuration specifies an idle timer that tells the router to drop the connection when the circuit has remained idle for a certain period.
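As a rough illustration of the idle-timer behavior (a Python sketch under assumed values, not actual router code; the 120-second timeout and the notion of "interesting" traffic are assumptions for the example):

    import time

    IDLE_TIMEOUT = 120            # assumed idle period, in seconds, before the call is dropped
    link_up = False
    last_interesting_traffic = 0.0

    def packet_arrived(is_interesting):
        # Interesting traffic brings the switched circuit up and resets the idle timer.
        global link_up, last_interesting_traffic
        if is_interesting:
            if not link_up:
                link_up = True                      # the router places the call
            last_interesting_traffic = time.time()  # reset the idle timer

    def idle_timer_tick():
        # Called periodically: drop the call once the circuit has been idle long enough.
        global link_up
        if link_up and time.time() - last_interesting_traffic > IDLE_TIMEOUT:
            link_up = False                         # the router tears the connection down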

Dial backup is another way of configuring DDR. However, in dial backup, the switched circuit is used to provide backup service for another type of circuit, such as point-to-point or packet switching. The router is configured so that when a failure is detected on the primary circuit, the dial backup line is initiated. The dial backup line then supports the WAN connection until the primary circuit is restored. When this occurs, the dial backup connection is terminated.

WAN Devices

WANs use numerous types of devices that are specific to WAN environments. WAN switches, access servers, modems, CSU/DSUs, and ISDN terminal adapters are discussed in the following sections. Other devices found in WAN environments that are used in WAN implementations include routers, ATM switches, and multiplexers.

WAN Switch

A WAN switch is a multiport internetworking device used in carrier networks. These devices typically switch such traffic as Frame Relay, X.25, and SMDS, and operate at the data link layer of the OSI reference model. Figure 3-5 illustrates two routers at remote ends of a WAN that are connected by WAN switches.

Figure 3-5 Two Routers at Remote Ends of a WAN Can Be Connected by WAN Switches



Access Server

An access server acts as a concentration point for dial-in and dial-out connections. Figure 3-6 illustrates an access server concentrating dial-out connections into a WAN.

Figure 3-6 An Access Server Concentrates Dial-Out Connections into a WAN



Modem

A modem is a device that converts between digital and analog signals, enabling data to be transmitted over voice-grade telephone lines. At the source, digital signals are converted to a form suitable for transmission over analog communication facilities. At the destination, these analog signals are returned to their digital form. Figure 3-7 illustrates a simple modem-to-modem connection through a WAN.

Figure 3-7 A Modem Connection Through a WAN Handles Analog and Digital Signals



CSU/DSU

A channel service unit/digital service unit (CSU/DSU) is a digital-interface device used to connect a router to a digital circuit like a T1. The CSU/DSU also provides signal timing for communication between these devices.

ISDN Terminal Adapter

An ISDN terminal adapter is a device used to connect ISDN Basic Rate Interface (BRI) connections to other interfaces, such as EIA/TIA-232 on a router. A terminal adapter is essentially an ISDN modem, although it is called a terminal adapter because it does not actually convert analog to digital signals. Figure 3-9 illustrates the placement of the terminal adapter in an ISDN environment.

Figure 3-9 The Terminal Adapter Connects the ISDN Interface to Other Interfaces



Questions

Q—What are some types of WAN circuits?

A—Point-to-point, packet-switched, and circuit-switched.

Q—What is DDR, and how is it different from dial backup?

A—DDR is dial-on-demand routing. DDR dials up the remote site when traffic needs to be transmitted. Dial backup uses the same type of services, but for backup to a primary circuit. When the primary circuit fails, the dial backup line is initiated until the primary circuit is restored.

Q—What is a CSU/DSU used for?

A—A CSU/DSU interfaces a router with a digital line such as a T1.

Q—What is the difference between a modem and an ISDN terminal adapter?

A—A modem converts digital signals into analog for transmission over a telephone line. Because ISDN circuits are digital, the conversion from digital to analog is not required.

Routing Basics

Routing Basics

What Is Routing?

Routing is the act of moving information across an internetwork from a source to a destination. Along the way, at least one intermediate node typically is encountered. Routing is often contrasted with bridging, which might seem to accomplish precisely the same thing to the casual observer. The primary difference between the two is that bridging occurs at Layer 2 (the link layer) of the OSI reference model, whereas routing occurs at Layer 3 (the network layer). This distinction provides routing and bridging with different information to use in the process of moving information from source to destination, so the two functions accomplish their tasks in different ways.

The topic of routing has been covered in computer science literature for more than two decades, but routing achieved commercial popularity as late as the mid-1980s. The primary reason for this time lag is that networks in the 1970s were simple, homogeneous environments. Only relatively recently has large-scale internetworking become popular.

Routing Components

Routing involves two basic activities: determining optimal routing paths and transporting information groups (typically called packets) through an internetwork. In the context of the routing process, the latter of these is referred to as packet switching. Although packet switching is relatively straightforward, path determination can be very complex.

Path Determination

Routing protocols use metrics to evaluate what path will be the best for a packet to travel. A metric is a standard of measurement, such as path bandwidth, that is used by routing algorithms to determine the optimal path to a destination. To aid the process of path determination, routing algorithms initialize and maintain routing tables, which contain route information. Route information varies depending on the routing algorithm used.

Routing algorithms fill routing tables with a variety of information. Destination/next hop associations tell a router that a particular destination can be reached optimally by sending the packet to a particular router representing the "next hop" on the way to the final destination. When a router receives an incoming packet, it checks the destination address and attempts to associate this address with a next hop. Figure 5-1 depicts a sample destination/next hop routing table.

Figure 5-1 Destination/Next Hop Associations Determine the Data's Optimal Path
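As a small illustration of such a table (a Python sketch; the networks and next-hop addresses are invented for the example):

    # Destination/next hop associations, sketched as a simple lookup structure.
    routing_table = {
        "10.1.0.0/16": "192.168.1.2",   # reach 10.1.0.0/16 by forwarding to neighbor 192.168.1.2
        "10.2.0.0/16": "192.168.1.3",
        "10.3.0.0/16": "192.168.1.2",   # several destinations may share the same next hop
    }

    def next_hop_for(destination_network):
        # Associate an incoming packet's destination with a next hop, if one is known.
        return routing_table.get(destination_network)

    print(next_hop_for("10.1.0.0/16"))   # -> 192.168.1.2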



Routing tables also can contain other information, such as data about the desirability of a path. Routers compare metrics to determine optimal routes, and these metrics differ depending on the design of the routing algorithm used. A variety of common metrics will be introduced and described later in this chapter.

Routers communicate with one another and maintain their routing tables through the transmission of a variety of messages. The routing update message is one such message that generally consists of all or a portion of a routing table. By analyzing routing updates from all other routers, a router can build a detailed picture of network topology. A link-state advertisement, another example of a message sent between routers, informs other routers of the state of the sender's links. Link information also can be used to build a complete picture of network topology to enable routers to determine optimal routes to network destinations.

Switching

The switching algorithm is relatively simple; it is the same for most routing protocols. In most cases, a host determines that it must send a packet to another host. Having acquired a router's address by some means, the source host sends a packet addressed specifically to a router's physical (Media Access Control [MAC]-layer) address, this time with the protocol (network layer) address of the destination host.

As it examines the packet's destination protocol address, the router determines that it either knows or does not know how to forward the packet to the next hop. If the router does not know how to forward the packet, it typically drops the packet. If the router knows how to forward the packet, however, it changes the destination physical address to that of the next hop and transmits the packet.

The next hop may be the ultimate destination host. If not, the next hop is usually another router, which executes the same switching decision process. As the packet moves through the internetwork, its physical address changes, but its protocol address remains constant, as illustrated in Figure 5-2.
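The per-hop rewrite can be sketched in a few lines of Python (illustrative only; all addresses are invented). Only the physical (MAC) address changes at each hop, while the protocol (IP) address stays constant:

    def forward(packet, next_hop_table, mac_table):
        dest_ip = packet["dest_ip"]                 # protocol address: constant end to end
        next_hop = next_hop_table.get(dest_ip)
        if next_hop is None:
            return None                             # unknown destination: the packet is dropped
        packet["dest_mac"] = mac_table[next_hop]    # physical address: rewritten for this hop only
        return next_hop

    packet = {"dest_ip": "10.2.2.7", "dest_mac": "aa:aa:aa:aa:aa:aa"}
    next_hop_table = {"10.2.2.7": "R2"}             # learned by the routing process
    mac_table = {"R2": "bb:bb:bb:bb:bb:bb"}         # learned by address resolution
    print(forward(packet, next_hop_table, mac_table), packet["dest_mac"])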

The preceding discussion describes switching between a source and a destination end system. The International Organization for Standardization (ISO) has developed a hierarchical terminology that is useful in describing this process. Using this terminology, network devices without the capability to forward packets between subnetworks are called end systems (ESs), whereas network devices with these capabilities are called intermediate systems (ISs). ISs are further divided into those that can communicate within routing domains (intradomain ISs) and those that communicate both within and between routing domains (interdomain ISs). A routing domain generally is considered a portion of an internetwork under common administrative authority that is regulated by a particular set of administrative guidelines. Routing domains are also called autonomous systems. With certain protocols, routing domains can be divided into routing areas, but intradomain routing protocols are still used for switching both within and between areas.

Routing Algorithms

Routing algorithms can be differentiated based on several key characteristics. First, the particular goals of the algorithm designer affect the operation of the resulting routing protocol. Second, various types of routing algorithms exist, and each algorithm has a different impact on network and router resources. Finally, routing algorithms use a variety of metrics that affect calculation of optimal routes. The following sections analyze these routing algorithm attributes.

Design Goals

Routing algorithms often have one or more of the following design goals:

•Optimality

•Simplicity and low overhead

•Robustness and stability

•Rapid convergence

•Flexibility

Optimality refers to the capability of the routing algorithm to select the best route, which depends on the metrics and metric weightings used to make the calculation. For example, one routing algorithm may use both hop count and delay, but it may weigh delay more heavily in the calculation. Naturally, routing protocols must define their metric calculation algorithms strictly.

Routing algorithms also are designed to be as simple as possible. In other words, the routing algorithm must offer its functionality efficiently, with a minimum of software and utilization overhead. Efficiency is particularly important when the software implementing the routing algorithm must run on a computer with limited physical resources.

Routing algorithms must be robust, which means that they should perform correctly in the face of unusual or unforeseen circumstances, such as hardware failures, high load conditions, and incorrect implementations. Because routers are located at network junction points, they can cause considerable problems when they fail. The best routing algorithms are often those that have withstood the test of time and that have proven stable under a variety of network conditions.

In addition, routing algorithms must converge rapidly. Convergence is the process of agreement, by all routers, on optimal routes. When a network event causes routes to either go down or become available, routers distribute routing update messages that permeate networks, stimulating recalculation of optimal routes and eventually causing all routers to agree on these routes. Routing algorithms that converge slowly can cause routing loops or network outages.

In the routing loop displayed in Figure 5-3, a packet arrives at Router 1 at time t1. Router 1 already has been updated and thus knows that the optimal route to the destination calls for Router 2 to be the next stop. Router 1 therefore forwards the packet to Router 2, but because this router has not yet been updated, it believes that the optimal next hop is Router 1. Router 2 therefore forwards the packet back to Router 1, and the packet continues to bounce back and forth between the two routers until Router 2 receives its routing update or until the packet has been switched the maximum number of times allowed.

Routing algorithms should also be flexible, which means that they should quickly and accurately adapt to a variety of network circumstances. Assume, for example, that a network segment has gone down. As many routing algorithms become aware of the problem, they will quickly select the next-best path for all routes normally using that segment. Routing algorithms can be programmed to adapt to changes in network bandwidth, router queue size, and network delay, among other variables.

Algorithm Types

Routing algorithms can be classified by type. Key differentiators include these:

•Static versus dynamic

•Single-path versus multipath

•Flat versus hierarchical

•Host-intelligent versus router-intelligent

•Intradomain versus interdomain

•Link-state versus distance vector

Static Versus Dynamic

Static routing algorithms are hardly algorithms at all, but are table mappings established by the network administrator before the beginning of routing. These mappings do not change unless the network administrator alters them. Algorithms that use static routes are simple to design and work well in environments where network traffic is relatively predictable and where network design is relatively simple.

Because static routing systems cannot react to network changes, they generally are considered unsuitable for today's large, constantly changing networks. Most of the dominant routing algorithms today are dynamic routing algorithms, which adjust to changing network circumstances by analyzing incoming routing update messages. If the message indicates that a network change has occurred, the routing software recalculates routes and sends out new routing update messages. These messages permeate the network, stimulating routers to rerun their algorithms and change their routing tables accordingly.

Dynamic routing algorithms can be supplemented with static routes where appropriate. For example, a router of last resort (a router to which all unroutable packets are sent) can be designated, ensuring that all messages are at least handled in some way.
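A minimal sketch of that fallback (illustrative Python; the addresses are invented): dynamically learned routes are preferred, and anything unroutable goes to the router of last resort.

    dynamic_routes = {"10.1.0.0/16": "192.168.1.2"}   # learned from routing updates
    gateway_of_last_resort = "192.168.1.1"            # static route set by the administrator

    def choose_next_hop(destination_network):
        # Prefer a dynamically learned route; otherwise use the router of last resort.
        return dynamic_routes.get(destination_network, gateway_of_last_resort)

    print(choose_next_hop("10.1.0.0/16"))    # -> 192.168.1.2 (dynamic route)
    print(choose_next_hop("172.16.0.0/16"))  # -> 192.168.1.1 (last resort)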

Single-Path Versus Multipath

Some sophisticated routing protocols support multiple paths to the same destination. Unlike single-path algorithms, these multipath algorithms permit traffic multiplexing over multiple lines. The advantages of multipath algorithms are obvious: They can provide substantially better throughput and reliability. This is generally called load sharing.

Flat Versus Hierarchical

Some routing algorithms operate in a flat space, while others use routing hierarchies. In a flat routing system, the routers are peers of all others. In a hierarchical routing system, some routers form what amounts to a routing backbone. Packets from nonbackbone routers travel to the backbone routers, where they are sent through the backbone until they reach the general area of the destination. At this point, they travel from the last backbone router through one or more nonbackbone routers to the final destination.

Routing systems often designate logical groups of nodes, called domains, autonomous systems, or areas. In hierarchical systems, some routers in a domain can communicate with routers in other domains, while others can communicate only with routers within their domain. In very large networks, additional hierarchical levels may exist, with routers at the highest hierarchical level forming the routing backbone.

The primary advantage of hierarchical routing is that it mimics the organization of most companies and therefore supports their traffic patterns well. Most network communication occurs within small company groups (domains). Because intradomain routers need to know only about other routers within their domain, their routing algorithms can be simplified, and, depending on the routing algorithm being used, routing update traffic can be reduced accordingly.

Host-Intelligent Versus Router-Intelligent

Some routing algorithms assume that the source end node will determine the entire route. This is usually referred to as source routing. In source-routing systems, routers merely act as store-and-forward devices, mindlessly sending the packet to the next stop.

Other algorithms assume that hosts know nothing about routes. In these algorithms, routers determine the path through the internetwork based on their own calculations. In the first system, the hosts have the routing intelligence. In the latter system, routers have the routing intelligence.

Intradomain Versus Interdomain

Some routing algorithms work only within domains; others work within and between domains. The nature of these two algorithm types is different. It stands to reason, therefore, that an optimal intradomain-routing algorithm would not necessarily be an optimal interdomain-routing algorithm.

Link-State Versus Distance Vector

Link-state algorithms (also known as shortest path first algorithms) flood routing information to all nodes in the internetwork. Each router, however, sends only the portion of the routing table that describes the state of its own links. In link-state algorithms, each router builds a picture of the entire network in its routing tables. Distance vector algorithms (also known as Bellman-Ford algorithms) call for each router to send all or some portion of its routing table, but only to its neighbors. In essence, link-state algorithms send small updates everywhere, while distance vector algorithms send larger updates only to neighboring routers. Distance vector algorithms know only about their neighbors.
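The distance vector side of this can be sketched in a few lines of Python (a simplified Bellman-Ford update step; network names and costs are invented, and real protocols add timers, split horizon, and other safeguards):

    # My current best known cost to each destination network.
    my_table = {"NetA": 3, "NetB": 7}
    link_cost_to_neighbor = 1

    def process_neighbor_update(neighbor_table):
        # Adopt any route that is cheaper via the neighbor than what I currently have.
        changed = False
        for destination, neighbor_cost in neighbor_table.items():
            candidate = neighbor_cost + link_cost_to_neighbor
            if candidate < my_table.get(destination, float("inf")):
                my_table[destination] = candidate
                changed = True
        return changed   # if True, I would advertise my own table to my neighbors in turn

    process_neighbor_update({"NetA": 1, "NetC": 4})
    print(my_table)   # {'NetA': 2, 'NetB': 7, 'NetC': 5}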

Because they converge more quickly, link-state algorithms are somewhat less prone to routing loops than distance vector algorithms. On the other hand, link-state algorithms require more CPU power and memory than distance vector algorithms. Link-state algorithms, therefore, can be more expensive to implement and support. Link-state protocols are generally more scalable than distance vector protocols.

Routing Metrics

Routing tables contain information used by switching software to select the best route. But how, specifically, are routing tables built? What is the specific nature of the information that they contain? How do routing algorithms determine that one route is preferable to others?

Routing algorithms have used many different metrics to determine the best route. Sophisticated routing algorithms can base route selection on multiple metrics, combining them in a single (hybrid) metric. All the following metrics have been used:

•Path length

•Reliability

•Delay

•Bandwidth

•Load

•Communication cost

Path length is the most common routing metric. Some routing protocols allow network administrators to assign arbitrary costs to each network link. In this case, path length is the sum of the costs associated with each link traversed. Other routing protocols define hop count, a metric that specifies the number of passes through internetworking products, such as routers, that a packet must take en route from a source to a destination.
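For instance (illustrative Python; the link costs are invented), path length under administrator-assigned costs is just the sum along the path, while hop count simply counts the links traversed:

    path = [("RouterA", "RouterB", 10), ("RouterB", "RouterC", 5), ("RouterC", "RouterD", 10)]

    path_length = sum(cost for _, _, cost in path)   # sum of per-link costs -> 25
    hop_count = len(path)                            # number of links (hops) traversed -> 3
    print(path_length, hop_count)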

Reliability, in the context of routing algorithms, refers to the dependability (usually described in terms of the bit-error rate) of each network link. Some network links might go down more often than others. After a network fails, certain network links might be repaired more easily or more quickly than other links. Any reliability factors can be taken into account in the assignment of the reliability ratings, which are arbitrary numeric values usually assigned to network links by network administrators.

Routing delay refers to the length of time required to move a packet from source to destination through the internetwork. Delay depends on many factors, including the bandwidth of intermediate network links, the port queues at each router along the way, network congestion on all intermediate network links, and the physical distance to be traveled. Because delay is a conglomeration of several important variables, it is a common and useful metric.

Bandwidth refers to the available traffic capacity of a link. All other things being equal, a 10-Mbps Ethernet link would be preferable to a 64-kbps leased line. Although bandwidth is a rating of the maximum attainable throughput on a link, routes through links with greater bandwidth do not necessarily provide better routes than routes through slower links. For example, if a faster link is busier, the actual time required to send a packet to the destination could be greater.
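A quick back-of-the-envelope calculation (illustrative Python; the 1500-byte packet size is an assumption) shows the raw serialization times behind that comparison, before any queuing delay on a busy link is added in:

    packet_bits = 1500 * 8   # one 1500-byte packet

    for name, bits_per_second in [("64-kbps leased line", 64_000), ("10-Mbps Ethernet", 10_000_000)]:
        print(name, packet_bits / bits_per_second * 1000, "ms")
    # ~187.5 ms on the 64-kbps line versus ~1.2 ms on the Ethernet link, yet a heavily
    # loaded fast link can still deliver the packet later once queuing delay is included.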

Load refers to the degree to which a network resource, such as a router, is busy. Load can be calculated in a variety of ways, including CPU utilization and packets processed per second. Monitoring these parameters on a continual basis can be resource-intensive itself.

Communication cost is another important metric, especially because some companies may not care about performance as much as they care about operating expenditures. Although line delay may be longer, they will send packets over their own lines rather than through the public lines that cost money for usage time.

Network Protocols

Routed protocols are the protocols that carry user traffic across an internetwork, over paths selected by routing protocols. In general, routed protocols in this context also are referred to as network protocols. These network protocols perform a variety of functions required for communication between user applications in source and destination devices, and these functions can differ widely among protocol suites. Network protocols occur at the upper five layers of the OSI reference model: the network layer, the transport layer, the session layer, the presentation layer, and the application layer.

Confusion about the terms routed protocol and routing protocol is common. Routed protocols are protocols that are routed over an internetwork. Examples of such protocols are the Internet Protocol (IP), DECnet, AppleTalk, Novell NetWare, OSI, Banyan VINES, and Xerox Network System (XNS). Routing protocols, on the other hand, are protocols that implement routing algorithms. Put simply, routing protocols are used by intermediate systems to build tables used in determining path selection of routed protocols. Examples of these protocols include Interior Gateway Routing Protocol (IGRP), Enhanced Interior Gateway Routing Protocol (Enhanced IGRP), Open Shortest Path First (OSPF), Exterior Gateway Protocol (EGP), Border Gateway Protocol (BGP), Intermediate System-to-Intermediate System (IS-IS), and Routing Information Protocol (RIP). Routed and routing protocols are discussed in detail later in this book.

Questions

Q—Describe the process of routing packets.

A— Routing is the act of moving information across an internetwork from a source to a destination.

Q—What are some routing algorithm types?

A—Static, dynamic, flat, hierarchical, host-intelligent, router-intelligent, intradomain, interdomain, link-state, and distance vector.

Q—Describe the difference between static and dynamic routing.

A—Static routing is configured by the network administrator and is not capable of adjusting to changes in the network without network administrator intervention. Dynamic routing adjusts to changing network circumstances by analyzing incoming routing update messages without administrator intervention.

Q—What are some of the metrics used by routing protocols?

A—Path length, reliability, delay, bandwidth, load, and communication cost.

Saturday, September 20, 2008

SOME TRICKS IN WINDOWS XP



Internet Explorer 7 is full of many new features.
On the Internet it has a reputation as a Firefox killer!
Featuring: tab scrolling, a web search box, more speed, etc.



Keyboard Shortcuts :

CTRL+click (Open links in a new tab in the background)
CTRL+SHIFT+click (Open links in a new tab in the foreground)
CTRL+T (Open a new tab in the foreground)
ALT+ENTER (Open a new tab from the Address bar)
CTRL+Q (Open Quick Tabs - thumbnail view)
CTRL+TAB/CTRL+SHIFT+TAB (Switch between tabs)
CTRL+n (n can be 1-8) (Switch to a specific tab number)
CTRL+9 (Switch to the last tab)
CTRL+W (Close current tab)
ALT+F4 (Close all tabs)
CTRL+ALT+F4 (Close other tabs)

`````````````````````````````````````````````````````````````````````````````````````

Hidden Programs In Windows XP !


Strange but true: some good programs are hidden in Windows XP !!!

Programs :

1. Private Character Editor :
Used for creating and editing your own characters (custom glyphs), etc.

** start>>Run
** Now, type eudcedit



--------------------------------------------------------------------------------


2. Dr. Watson :
This is an inbuilt Windows error-debugging tool !

** start>>Run
** Now, type drwtsn32




--------------------------------------------------------------------------------


3. Media Player 5.1 :
Even if you upgrade your Media Player, you can still access your old player in case the new one fails !!!

** start>>Run
** Now, type mplay32



--------------------------------------------------------------------------------


4. iExpress :
Used to create Setups

You can create your own installers !

** start>>Run
** Now, type iexpress

--------------------------------------------------------------------------------

Wow Notepad Knew About 9/11/2001 !

Amazing thing but true , Notepad knew about 9/11/2001 !
Supposedly, the flight number of a plane which hit the WTC in New York was Q33N (it is really just an urban legend, but try it anyway) !

See it yourself :

1. Open Notepad
2. Type : Q33N
3. Now, go to Format menu
4. Choose Font.
5. Now, change the size to '72'
6. Now, change the font to 'Wingdings'
7. See what is displayed !

Isn't it amazing !

`````````````````````````````````````````````````````````````````````````````````````

Watch "Star Wars" In ASCII !

You Don't need to Download IT !

Just :

Start>>Run , type : telnet towel.blinkenlights.nl

And Enjoy The Movie !

`````````````````````````````````````````````````````````````````````````````````````
Trick To Create Table In Word !


To create a table in Ms Word you can use this shortcut !

>> Just type : +======+=====+====+===+==+=+

>> And simply hit 'Enter' !

>> You will see that the text changes to a table. Here, the number of '=' signs represents the width (in characters) of each cell !

>> It just makes your work easy and fast !

`````````````````````````````````````````````````````````````````````````````````````

Hibernate Your XP !

Hey your Windows XP has a very good but hidden feature !

Trick Advantage :

You can leave your work in the middle, shut down the PC, and resume it next time just as it was !
I even switched off my PC while writing this article, and when I reopened it I resumed my article from where I left off !

When you want to stop your work and shut down (keeping your programs open), do the following :

1. Click start>Turn Off Computer.
2. As the Turn Off menu comes up, press and hold 'Shift' and 'Stand By' changes to 'Hibernate'.
3. Click Hibernate (Shift Kept Pressed)

Your XP will save your work and shut down !

Now, when you switch it on again, it will resume your session !

No loading of Windows will take place and you will be returned to your work as if you had just switched your monitor off and now on again !

Its Amazing !

`````````````````````````````````````````````````````````````````````````````````````

Crack BIOS Password !!!


Forgot BIOS Password ?

Do the following :

1. Open the CPU cabinet (the computer case).
2. Now, observe the motherboard.
3. You will notice a coin-like silver battery (3V).

----------------------------------------- NOTE --------------------------------------------------------
This battery is the 24 x 7 power supply for the BIOS/CMOS memory, which is used to run the system clock while the main power is off. It also preserves the BIOS settings (including the password) that are read when power is switched on.
-----------------------------------------------------------------------------------------------------------

4. Remove the battery from the motherboard.
(It is safe to remove the Battery)
5. Wait 30 seconds and place the battery back on the motherboard.
6. Now, when you start your system you won't be prompted for the BIOS password.

Enjoy !!!
------------------------------------ CAUTION -----------------------------------------------
1. Perform at your own risk !
2. You have to set the time of your computer when you start again.
---------------------------------------------------------------------------------------------------

Block Or Unblock Websites without software !


Many times in schools, colleges, and offices, surfing entertainment sites is banned !

To overcome this you can unblock these or block some other websites and play pranks !


Do The Following :
For example, say you want to block www.abc.com !


* Open the folder C:\WINDOWS\system32\drivers\etc
* There you will find a file named HOSTS

* Click on the file, press SHIFT, and then right-click on it.
* From the right-click menu, select 'Open with'.

* Now, select Notepad to open the file from the list !
* Now, in the file under the line 127.0.0.1 localhost add another line as 127.0.0.2 www.abc.com.

* Now, File>>Save !


Now, open your web browser and try opening www.abc.com ; it will not load !


To unblock sites just do the opposite !

`````````````````````````````````````````````````````````````````````````````````````

Scare Your Friend With Auto Shutdown !

Read the following :

1. Right click on desktop>select New>shortcut
2. In the shortcut window type : shutdown -s -t 60 -c "the message you want to display"
3. Select Next
4. Name it anything.

Now, double click it !

Scared ???

Nothing happened !

Remedy :

1. Go to start>Run
2. Type : shutdown -a
3. Hit Enter

Oh! You are rescued !

Wednesday, September 10, 2008

KNOW ABOUT BIGBANG THEORY

Big Bang Theory - The Premise

The Big Bang theory is an effort to explain what happened at the very beginning of our universe. Discoveries in astronomy and physics have shown beyond a reasonable doubt that our universe did in fact have a beginning. Prior to that moment there was nothing; during and after that moment there was something: our universe. The big bang theory is an effort to explain what happened during and after that moment.

According to the standard theory, our universe sprang into existence as a "singularity" around 13.7 billion years ago. What is a "singularity" and where does it come from? Well, to be honest, we don't know for sure. Singularities are zones which defy our current understanding of physics. They are thought to exist at the core of "black holes." Black holes are areas of intense gravitational pressure. The pressure is thought to be so intense that finite matter is actually squished into infinite density (a mathematical concept which truly boggles the mind). These zones of infinite density are called "singularities." Our universe is thought to have begun as an infinitesimally small, infinitely hot, infinitely dense, something - a singularity. Where did it come from? We don't know. Why did it appear? We don't know.

After its initial appearance, it apparently inflated (the "Big Bang"), expanded and cooled, going from very, very small and very, very hot, to the size and temperature of our current universe. It continues to expand and cool to this day and we are inside of it: incredible creatures living on a unique planet, circling a beautiful star clustered together with several hundred billion other stars in a galaxy soaring through the cosmos, all of which is inside of an expanding universe that began as an infinitesimal singularity which appeared out of nowhere for reasons unknown. This is the Big Bang theory.

Big Bang Theory - Common Misconceptions

There are many misconceptions surrounding the Big Bang theory. For example, we tend to imagine a giant explosion. Experts however say that there was no explosion; there was (and continues to be) an expansion. Rather than imagining a balloon popping and releasing its contents, imagine a balloon expanding: an infinitesimally small balloon expanding to the size of our current universe.

Another misconception is that we tend to imagine the singularity as a little fireball appearing somewhere in space. According to many experts, however, space didn't exist prior to the Big Bang. Back in the late '60s and early '70s, when men first walked upon the moon, "three British astrophysicists, Stephen Hawking, George Ellis, and Roger Penrose turned their attention to the Theory of Relativity and its implications regarding our notions of time. In 1968 and 1970, they published papers in which they extended Einstein's Theory of General Relativity to include measurements of time and space.1, 2 According to their calculations, time and space had a finite beginning that corresponded to the origin of matter and energy."3 The singularity didn't appear in space; rather, space began inside of the singularity. Prior to the singularity, nothing existed, not space, time, matter, or energy - nothing. So where and in what did the singularity appear if not in space? We don't know. We don't know where it came from, why it's here, or even where it is. All we really know is that we are inside of it and at one time it didn't exist and neither did we.

Big Bang Theory -
Evidence for the Theory
What are the major evidences which support the Big Bang theory?


* First of all, we are reasonably certain that the universe had a beginning.
* Second, galaxies appear to be moving away from us at speeds proportional to their distance. This is called "Hubble's Law," named after Edwin Hubble (1889-1953) who discovered this phenomenon in 1929. This observation supports the expansion of the universe and suggests that the universe was once compacted.
* Third, if the universe was initially very, very hot as the Big Bang suggests, we should be able to find some remnant of this heat. In 1965, radio astronomers Arno Penzias and Robert Wilson discovered the 2.725 Kelvin (-454.765 degrees Fahrenheit, -270.425 degrees Celsius) Cosmic Microwave Background radiation (CMB) which pervades the observable universe. This is thought to be the remnant which scientists were looking for. Penzias and Wilson shared the 1978 Nobel Prize in Physics for their discovery.
* Finally, the abundance of the "light elements" hydrogen and helium found in the observable universe is thought to support the Big Bang model of origins.

Big Bang Theory - The Only Plausible Theory?

Is the standard Big Bang theory the only model consistent with these evidences? No, it's just the most popular one. Internationally renowned astrophysicist George F. R. Ellis explains: "People need to be aware that there is a range of models that could explain the observations….For instance, I can construct you a spherically symmetrical universe with Earth at its center, and you cannot disprove it based on observations….You can only exclude it on philosophical grounds. In my view there is absolutely nothing wrong in that. What I want to bring into the open is the fact that we are using philosophical criteria in choosing our models. A lot of cosmology tries to hide that."4

In 2003, physicist Robert Gentry proposed an attractive alternative to the standard theory, an alternative which also accounts for the evidences listed above.5 Dr. Gentry claims that the standard Big Bang model is founded upon a faulty paradigm (the Friedmann-Lemaître expanding-spacetime paradigm) which he claims is inconsistent with the empirical data. He chooses instead to base his model on Einstein's static-spacetime paradigm which he claims is the "genuine cosmic Rosetta." Gentry has published several papers outlining what he considers to be serious flaws in the standard Big Bang model.6 Other high-profile dissenters include Nobel laureate Dr. Hannes Alfvén, Professor Geoffrey Burbidge, Dr. Halton Arp, and the renowned British astronomer Sir Fred Hoyle, who is credited with first coining the term "the Big Bang" during a BBC radio broadcast in 1950.

Big Bang Theory - What About God?

Any discussion of the Big Bang theory would be incomplete without asking the question, what about God? This is because cosmogony (the study of the origin of the universe) is an area where science and theology meet. Creation was a supernatural event. That is, it took place outside of the natural realm. This fact begs the question: is there anything else which exists outside of the natural realm? Specifically, is there a master Architect out there? We know that this universe had a beginning.

Monday, August 11, 2008

ROUTERS





Router

A router (pronounced /'rautər/ in the USA, pronounced /'ru:tər/ in the UK, or either pronunciation in Australia) is a computer whose software and hardware are usually tailored to the tasks of routing and forwarding information. Routers generally contain a specialized operating system (e.g. Cisco's IOS or Juniper Networks' JUNOS and JUNOSe or Extreme Networks' XOS), RAM, NVRAM, flash memory, and one or more processors. High-end routers contain many processors and specialized application-specific integrated circuits (ASICs) and do a great deal of parallel processing. Chassis-based systems like the Nortel MERS-8600 or ERS-8600 routing switch have multiple ASICs on every module and allow for a wide variety of LAN, MAN, METRO, and WAN port technologies or other connections that are customizable. Much simpler routers are used where cost is important and the demand is low, for example in providing a home internet service. With appropriate software (such as Untangle, SmoothWall, XORP or Quagga), a standard PC can act as a router.

Routers connect two or more logical subnets, which do not necessarily map one-to-one to the physical interfaces of the router.[1] The term layer 3 switch often is used interchangeably with router, but switch is really a general term without a rigorous technical definition. In marketing usage, a layer 3 switch is generally optimized for Ethernet LAN interfaces and may not have other physical interface types.

Routers operate in two different planes:

Control Plane, in which the router learns the outgoing interface that is most appropriate for forwarding specific packets to specific destinations,
Forwarding Plane, which is responsible for the actual process of sending a packet received on a logical interface to an outbound logical interface.

Control Plane

Control Plane processing leads to the construction of what is variously called a routing table or routing information base (RIB). The RIB may be used by the Forwarding Plane to look up the outbound interface for a given packet, or, depending on the router implementation, the Control Plane may populate a separate Forwarding Information Base (FIB) with destination information. RIBs are optimized for efficient updating with control mechanisms such as routing protocols, while FIBs are optimized for the fastest possible lookup of the information needed to select the outbound interface.

The Control Plane constructs the routing table from knowledge of the up/down status of its local interfaces, from hard-coded static routes, and from exchanging routing protocol information with other routers. It is not compulsory for a router to use routing protocols to function, if for example it was configured solely with static routes. The routing table stores the best routes to certain network destinations, the "routing metrics" associated with those routes, and the path to the next hop router.
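As a rough sketch of what the lookup against that table amounts to (illustrative Python using the standard ipaddress module; the prefixes and interface names are invented), the most specific matching route wins:

    import ipaddress

    fib = {
        "10.0.0.0/8":  "ge-0/0/1",
        "10.1.0.0/16": "ge-0/0/2",
        "0.0.0.0/0":   "ge-0/0/0",   # default route
    }

    def lookup(destination):
        # Longest-prefix match: among all routes covering the address, pick the most specific.
        dest = ipaddress.ip_address(destination)
        matches = [ipaddress.ip_network(p) for p in fib if dest in ipaddress.ip_network(p)]
        best = max(matches, key=lambda n: n.prefixlen)
        return fib[str(best)]

    print(lookup("10.1.2.3"))    # -> ge-0/0/2 (the /16 beats the /8 and the default)
    print(lookup("192.0.2.1"))   # -> ge-0/0/0 (default route)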

Routers do maintain state on the routes in the RIB/routing table, but they do not maintain state on the individual packets that they forward.

Forwarding Plane (a.k.a. Data Plane)

For the pure Internet Protocol (IP) forwarding function, router design tries to minimize the state information kept on individual packets. Once a packet is forwarded, the router should no longer retain statistical information about it. It is the sending and receiving endpoints that keep information about such things as errored or missing packets.

Forwarding decisions can involve decisions at layers other than the IP internetwork layer or OSI layer 3. Again, the marketing term switch can be applied to devices that have these capabilities. A function that forwards based on data link layer, or OSI layer 2, information, is properly called a bridge. Marketing literature may call it a layer 2 switch, but a switch has no precise definition.

Among the most important forwarding decisions is deciding what to do when congestion occurs, i.e., packets arrive at the router at a rate higher than the router can process. Three policies commonly used in the Internet are Tail drop, Random early detection, and Weighted random early detection. Tail drop is the simplest and most easily implemented; the router simply drops packets once the length of the queue exceeds the size of the buffers in the router. Random early detection (RED) probabilistically drops datagrams early when the queue exceeds a configured size. Weighted random early detection requires a weighted average queue size to exceed the configured size, so that short bursts will not trigger random drops.
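A minimal sketch of the first two policies (illustrative Python; the queue limit and RED thresholds are arbitrary assumptions):

    import random

    QUEUE_LIMIT = 100            # buffer size, in packets
    RED_MIN, RED_MAX = 40, 100   # RED starts dropping probabilistically above RED_MIN

    def tail_drop(queue_length):
        # Drop only when the buffer is completely full.
        return queue_length >= QUEUE_LIMIT

    def red_drop(queue_length):
        # Drop with a probability that grows as the queue fills, before it is full.
        if queue_length < RED_MIN:
            return False
        if queue_length >= RED_MAX:
            return True
        drop_probability = (queue_length - RED_MIN) / (RED_MAX - RED_MIN)
        return random.random() < drop_probability

    print(tail_drop(90), red_drop(90))   # tail drop keeps it; RED drops with ~83% probability

In the weighted variant, the instantaneous queue_length above would be replaced by a weighted average of the queue size, so that short bursts do not trigger random drops.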

Types of routers

Routers may provide connectivity inside enterprises, between enterprises and the Internet, and inside Internet Service Providers (ISP). The largest routers (for example the Cisco CRS-1 or Juniper T1600) interconnect ISPs, are used inside ISPs, or may be used in very large enterprise networks. The smallest routers provide connectivity for small and home offices.

Routers for Internet connectivity and internal use

Routers intended for ISP and major enterprise connectivity will almost invariably exchange routing information with the Border Gateway Protocol. RFC 4098[3] defines several types of BGP-speaking routers:

Provider Edge Router: Placed at the edge of an ISP network, it speaks external BGP (eBGP) to a BGP speaker in another provider or large enterprise Autonomous System (AS).
Subscriber Edge Router: Located at the edge of the subscriber's network, it speaks eBGP to its provider's AS(s). It belongs to an end user (enterprise) organization.
Inter-provider Border Router: Interconnecting ISPs, this is a BGP speaking router that maintains BGP sessions with other BGP speaking routers in other providers' ASes.
Core router: A router that resides within the middle or backbone of the LAN network rather than at its periphery.
Within an ISP: Internal to the provider's AS, such a router speaks internal BGP (iBGP) to that provider's edge routers, other intra-provider core routers, or the provider's inter-provider border routers.
"Internet backbone:" The Internet does not have a clearly identifiable backbone, as did its predecessors. See default-free zone (DFZ). Nevertheless, it is the major ISPs' routers that make up what many would consider the core. These ISPs operate all four types of the BGP-speaking routers described here. In ISP usage, a "core" router is internal to an ISP, and used to interconnect its edge and border routers. Core routers may also have specialized functions in virtual private networks based on a combination of BGP and Multi-Protocol Label Switching (MPLS).

Small Office Home Office (SOHO) connectivity

Residential gateways (often called routers) are frequently used in homes to connect to a broadband service, such as IP over cable or DSL. A home router may allow connectivity to an enterprise via a secure Virtual Private Network.

While functionally similar to routers, residential gateways use port address translation in addition to routing. Instead of connecting local computers to the remote network directly, a residential gateway makes multiple local computers appear to be a single computer.

Enterprise Routers

All sizes of routers may be found inside enterprises. The most powerful routers tend to be found in ISPs but academic and research facilities, as well as large businesses, may also need large routers.

A three-layer model is in common use, although not all of its tiers need be present in smaller networks.


Access

Access routers, including SOHO, are located at customer sites such as branch offices that do not need hierarchical routing of their own. Typically, they are optimized for low cost.


Distribution

Distribution routers aggregate traffic from multiple access routers, either at the same site, or to collect the data streams from multiple sites to a major enterprise location. Distribution routers often are responsible for enforcing quality of service across a WAN, so they may have considerable memory, multiple WAN interfaces, and substantial processing intelligence.

They may also provide connectivity to groups of servers or to external networks. In the latter application, the router's functionality must be carefully considered as part of the overall security architecture. Separate from the router may be a Firewall or VPN concentrator, or the router may include these and other security functions.

When an enterprise is primarily on one campus, there may not be a distinct distribution tier, other than perhaps off-campus access. In such cases, the access routers, connected to LANs, interconnect via core routers.


Core

In enterprises, a core router may provide a "collapsed backbone" interconnecting the distribution tier routers from multiple buildings of a campus, or large enterprise locations. They tend to be optimized for high bandwidth.

When an enterprise is widely distributed with no central location(s), the function of core routing may be subsumed by the WAN service to which the enterprise subscribes, and the distribution routers become the highest tier.



History




A Cisco ASM/2-32EM router deployed at CERN in 1987.

The very first device that had fundamentally the same functionality as a router does today, i.e. a packet switch, was the Interface Message Processor (IMP); IMPs were the devices that made up the ARPANET, the first packet switching network. The idea for a router (although they were called "gateways" at the time) initially came about through an international group of computer networking researchers called the International Network Working Group (INWG). Set up in 1972 as an informal group to consider the technical issues involved in connecting different networks, later that year it became a subcommittee of the International Federation for Information Processing.

These devices were different from most previous packet switches in two ways. First, they connected dissimilar kinds of networks, such as serial lines and local area networks. Second, they were connectionless devices, which had no role in assuring that traffic was delivered reliably, leaving that entirely to the hosts (although this particular idea had been previously pioneered in the CYCLADES network).

The idea was explored in more detail, with the intention of producing a real prototype system, as part of two contemporaneous programs. One was the initial DARPA-initiated program, which created the TCP/IP architecture of today. The other was a program at Xerox PARC to explore new networking technologies, which produced the PARC Universal Packet system, although due to corporate intellectual property concerns it received little attention outside Xerox until years later.

The earliest Xerox routers came into operation sometime after early 1974. The first true IP router was developed by Virginia Strazisar at BBN, as part of that DARPA-initiated effort, during 1975-1976. By the end of 1976, three PDP-11-based routers were in service in the experimental prototype Internet.

The first multiprotocol routers were independently created by staff researchers at MIT and Stanford in 1981; the Stanford router was done by William Yeager, and the MIT one by Noel Chiappa; both were also based on PDP-11s.

As virtually all networking now uses IP at the network layer, multiprotocol routers are largely obsolete, although they were important in the early stages of the growth of computer networking, when several protocols other than TCP/IP were in widespread use. Routers that handle both IPv4 and IPv6 arguably are multiprotocol, but in a far less variable sense than a router that processed AppleTalk, DECnet, IP, and Xerox protocols.

In the original era of routing (from the mid-1970s through the 1980s), general-purpose mini-computers served as routers. Although general-purpose computers can perform routing, modern high-speed routers are highly specialized computers, generally with extra hardware added to accelerate both common routing functions such as packet forwarding and specialised functions such as IPsec encryption.

Still, there is substantial use of Linux and Unix machines, running open source routing code, for routing research and selected other applications. While Cisco's operating system was independently designed, other major router operating systems, such as those from Juniper Networks and Extreme Networks, are extensively modified but still have Unix ancestry.

Wednesday, July 2, 2008

LINUX TUTORIAL FOR YOU






Step-by-Step Guide & Tutorial Pages

Have an old system gathering dust? Convert it into a Linux server! It's easy to do. Just follow along with our guide pages and we'll walk you through installing the Debian Linux OS and setting up a network with the most common types of Internet and LAN servers. You'll learn some things about operating systems, networking, and the Internet in the process, and you may just have some fun along the way. Even if you have never worked with Linux before, you'll be able to use our guide pages to go from zero to "sysadmin" in no time, as well as get a solid start in the knowledge needed for the Linux+ certification.



Why Not Red Hat ?

Red Hat is in a tough spot. Most of their revenue streams are based on sales, support, and training while the open nature of Linux has resulted in thousands of freely-available Linux resources on the Web. Their survival depends on having a product that is proprietary enough to make you dependent upon them for upgrades and support. And now that they are a publicly-held company they are under pressure to meet the expectations of Wall Street analysts for revenue growth and cash flows every quarter. (Did you think it was just a coincidence that they churned out new versions at an average of two a year?) In time, Red Hat's dominance will likely kill off smaller commercial distributions like Mandrake and TurboLinux and dealing with Red Hat will be no different than dealing with Microsoft.


Why Debian ?

Debian is the world's leading non-commercial, totally free Linux distribution. Remaining loyal to the concept upon which Linux was created, it is produced by hundreds of volunteer developers around the world. Contrary to a common misconception, Debian is not for Linux gurus only. As a matter of fact, as you will see on the guide pages, its advanced package management system makes it one of the easier distributions for new Linux users to work with. Here are just a few of its advantages.


Advantages:

Non-Proprietary: Debian is a true GNU/Linux distribution using the standard UNIX style commands. This ensures that what you learn today won't be obsolete in two years and makes it easier to also learn how to work with UNIX systems.

Easy Maintenance: A seamless, totally-integrated package management system makes it easy to keep your system up to date and free of orphan files and incompatible products. Most dependent packages are handled automatically so you don't get the "Failed dependencies" error commonly encountered when trying to add software on RPM-based systems like Red Hat and Suse.

Automated Patching: The Debian package system also allows you to use a single command to update your entire system (operating system and installed packages) over the Internet. This allows you to use a scheduler to routinely run a shell script to automatically update your system with the latest program, OS, and security patches.

Extensive: Only free software packages (applications, utilities, etc.) are included in the official Debian distribution, and there are over 18,000 of them; the current binary release fills 21 CDs or 3 DVDs. With Debian, there are no separate "server", "workstation" or "personal" editions. It's everything all in one.

Support Options: Peer support is available through a community of listservs (mailing lists) and chat rooms. Replies to messages may even come from those who helped develop the product. And since you're likely not the first person to encounter a given issue, there are also searchable archives of listserv messages. If your company requires commercial support contracts, fear not: numerous for-profit support operations offer a variety of technical support options. With Debian, you don't have to worry about forced upgrades due to vendors dropping support for older versions.

Minimal Investment: Debian's performance is excellent even with the modest hardware requirements Linux is famous for. While most OSs require newer, faster, bigger hardware, Debian allows you to utilize those old Pentium systems instead of throwing them into a landfill. This, along with the fact that you can load a single copy of Debian on as many systems as you want, means you can set up a full-blown enterprise at very little cost.

Reliable: Debian's focus on stability and reliability results in servers that you may have to reboot once a year, rather than once a month.
User-centric: New versions of Debian are developed when major changes warrant one, not to generate revenue from upgrades. (You need only look at the version numbers of the various distributions to verify this.)

Debian disc images are available for download from www.debian.org. If you download the images, be sure to get the current "stable" release (the "i386" set for an Intel PC system). However, downloading and burning a full CD or DVD set takes some time and effort. You can also purchase ready-made DVD sets from Web vendors for around $20, with CD sets costing a little more.

Why Not Debian ?

If you're the type who likes to base your operations on the bleeding edge, Debian isn't for you. Debian's focus on providing a stable, reliable operating system across all platforms means it will never be "first to market" with new bells and whistles. New features are incorporated into releases only once the bugs have been discovered and worked out.

SOME FLAVOURS OF LINUX






What is Ubuntu?

Ubuntu is a community-developed operating system that is perfect for laptops, desktops and servers. Whether you use it at home, at school or at work, Ubuntu contains all the applications you'll ever need, from word processing and email applications to web server software and programming tools.
Ubuntu is and always will be free of charge. You do not pay any licensing fees. You can download, use and share Ubuntu with your friends, family, school or business for absolutely nothing.
We issue a new desktop and server release every six months. That means you'll always have the latest and greatest applications that the open source world has to offer.
Ubuntu is designed with security in mind. You get free security updates for at least 18 months on the desktop and server. With the Long Term Support (LTS) version you get three years support on the desktop, and five years on the server. There is no extra fee for the LTS version, we make our very best work available to everyone on the same free terms. Upgrades to new versions of Ubuntu are and always will be free of charge.
Everything you need on one CD, which provides a complete working environment. Additional software is available online.
The graphical installer enables you to get up and running quickly and easily. A standard installation should take less than 25 minutes.
Once installed, your system is immediately ready to use. On the desktop you have a full set of productivity, internet, drawing and graphics applications, and games.

What does Ubuntu mean?
Ubuntu is an African word meaning 'Humanity to others', or 'I am what I am because of who we all are'. The Ubuntu distribution brings the spirit of Ubuntu to the software world.

Ubuntu Server Edition

The Server Edition, built on the solid foundation of Debian, which is known for its robust server installations, has a strong heritage of reliable performance and predictable evolution.
Integrated and secure platform
As your business grows, so does your network. More applications need to be deployed and more servers are required. Ubuntu Server Edition offers support for several common configurations, and simplifies common Linux server deployment processes. It provides a well-integrated platform enabling you to quickly and easily deploy a new server with any of the standard internet services: mail, web, DNS, file serving or database management.
A key lesson from its Debian heritage is that of security by default. The Ubuntu Server has no open ports after the installation and contains only the essential software needed to build a secure server.
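
You can check this claim for yourself after installation; assuming the netstat tool (from the net-tools package) is present, listing the listening TCP and UDP sockets is a one-liner:

sudo netstat -tuln

A fresh Ubuntu Server install should show little or nothing listening on external interfaces.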
Lower total cost of ownership with automatic LAMP installation
In around 15 minutes, the time it takes to install Ubuntu Server Edition, you can have a LAMP (Linux, Apache, MySQL and PHP) server up and ready to go. This feature, exclusive to Ubuntu Server Edition, is available at the time of installation.
The LAMP option means you don't have to install and integrate each of the four separate LAMP components, a process which can take hours and requires someone who is skilled in the installation and configuration of the individual applications. Instead, you get increased security, reduced time-to-install, and reduced risk of misconfiguration, all of which results in a lower cost of ownership.
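
If you skipped the LAMP option during installation, a comparable result can usually be achieved afterwards with tasksel (shown here as a sketch; the exact task name can vary between releases):

sudo tasksel install lamp-server

This pulls in and configures Apache, MySQL and PHP in one go, and will typically prompt you for a MySQL root password along the way.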
Eliminate the cost of updating individual workstations
Ubuntu Server Edition includes thin client support using LTSP (the Linux Terminal Server Project). LTSP-5, the latest release, offers simple installation and easy maintenance. All data is stored on the server, which substantially diminishes the cost of updating individual workstations and helps to ensure their security. (A short installation sketch follows the list of benefits below.) Notable benefits of Ubuntu's thin client support are:
Simplified management: manage all clients from one system. Install new software, change their configuration, or even upgrade to a new version on the server, and all clients are instantly up to date. There is only one backup to take for all clients.
Fully automatic installation and setup: installing a thin client server is as easy as installing a single desktop system, and once it's finished, new clients can be added with no additional administration on the server.
Lower TCO through shared resources: Common high-powered desktop workstations sit idle most of the day consuming power and costing your organization money. With a high-powered server and low-cost thin clients, you can get great performance and save money. Need higher performance? Just upgrade the server, and all clients instantly benefit.
Quick failure recovery: If a client system fails, simply swap in a new one and continue working. No configuration is required, and all of the user's data and settings are intact.
Locally attached devices: Users can access printers, cameras, iPods, USB sticks and other devices connected directly to the thin client.
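
As a rough sketch of how thin client support is typically added to an existing Ubuntu server (the package and command names below are those used by LTSP-5 on Ubuntu; treat them as an assumption and check the current documentation before relying on them):

sudo apt-get install ltsp-server-standalone
sudo ltsp-build-client

The first command installs the LTSP server together with its own DHCP and TFTP services; the second builds the client environment that is served to the thin clients at boot.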


KUBUNTU

KUBUNTU is an official derivative of Ubuntu using the KDE environment instead of GNOME. It is part of the Ubuntu project and uses the same underlying system. It is an exciting distribution that showcases the full potential of the KDE desktop. Kubuntu shares the same repositories as Ubuntu, and relies on the same underlying architecture.

The K Desktop Environment

KDE, a powerful graphical desktop environment, combines ease of use, contemporary functionality, and outstanding graphical design with the technological superiority of the Kubuntu operating system. KDE 3.5.5 is the current stable release, and Kubuntu 6.10 is the first distribution to include it.


Photo Management

Digikam is now included by default. This advanced digital photo management application provides you with the tools necessary to view, manage, edit, enhance, organise, tag and share photographs. Organising both photos and photo albums is a snap with Digikam as it allows you to sort chronologically, by directory layout, or by custom collections.
Power Management
Kubuntu received a power management overhaul with the latest release. Guidance, the power management system, lets users control the power use of their portable computer, whether that is dimming the display on low battery, locking the system when the lid is closed, or managing access to multiple batteries.

Easy Networking and Printer Sharing
Zeroconf and print sharing let you browse the local network for available services. Both are now simple to set up and maintain, requiring nothing more than ticking a box to enable the feature.
Accessibility Profiles
Kubuntu now offers users the ability to use a preconfigured accessibility profile, chosen according to the type of disability, right from the initial point of setup. This gives users the accessibility features they need not only to install the Kubuntu 6.10 operating system but also to use the system daily for all of their computing needs. Press F5 at the CD boot screen to choose a profile.

Thursday, May 8, 2008

LINUX EDUCATION

Does Linux have a future?

Open Source

The idea behind Open Source software is rather simple: when programmers can read, distribute and change code, the code will mature. People can adapt it, fix it and debug it, and they can do it at a speed that dwarfs the performance of software developers at conventional companies. The resulting software is more flexible and of better quality than software developed through conventional channels, because more people have tested it under more conditions than any closed software developer ever could.



The Open Source initiative started to make this clear to the commercial world, and very slowly, commercial vendors are starting to see the point. While lots of academics and technical people have already been convinced for 20 years now that this is the way to go, commercial vendors needed applications like the Internet to make them realize they can profit from Open Source. Now Linux has grown past the stage where it was almost exclusively an academic system, useful only to a handful of people with a technical background. Now Linux provides more than the operating system: there is an entire infrastructure supporting the chain of effort of creating an operating system, of making and testing programs for it, of bringing everything to the users, of supplying maintenance, updates and support and customizations, etcetera. Today, Linux is ready to accept the challenge of a fast-changing world.

Ten years of experience at your service

While Linux is probably the most well-known Open Source initiative, there is another project that contributed enormously to the popularity of the Linux operating system. This project is called SAMBA, and its achievement is the reverse engineering of the Server Message Block (SMB)/Common Internet File System (CIFS) protocol used for file and print serving on PC-related machines, natively supported by MS Windows NT and OS/2 and now implemented on Linux. SAMBA packages are available for almost every system and provide interconnection solutions in mixed environments using MS Windows protocols: Windows-compatible (up to and including Win2K) file and print servers.
Maybe even more successful than the SAMBA project is the Apache HTTP server project. The server runs on UNIX, Windows NT and many other operating systems. Originally known as "A PAtCHy server", based on existing code and a series of "patch files", the matured code now shares its name with the Native American Apache tribe, well known for superior strategy and endurance. Apache has been shown to be substantially faster, more stable and richer in features than many other web servers. Apache runs on sites that get millions of visitors per day, and while no official support is provided by the developers, the Apache user community provides answers to all your questions. Commercial support is now provided by a number of third parties.
In the category of office applications, a choice of MS Office suite clones is available, ranging from partial to full implementations of the applications available on MS Windows workstations. These initiatives helped a great deal to make Linux acceptable for the desktop market, because the users don't need extra training to learn how to work with new systems. With the desktop comes the praise of the common users, and not only their praise, but also their specific requirements, which are growing more intricate and demanding by the day.
The Open Source community, consisting largely of people who have been contributing for over half a decade, assures Linux's position as an important player in the desktop market as well as in general IT applications. Paid employees and volunteers alike work diligently so that Linux can maintain its position in the market. The more users, the more questions. The Open Source community makes sure answers keep coming, and watches the quality of those answers with a suspicious eye, resulting in ever more stability and accessibility.
Listing all the available Linux software is beyond the scope of this guide, as there are tens of thousands of packages. Throughout this course we will present you with the most common packages, which are almost all freely available. To take away some of the fear of the beginning user: no effort has been spared to make users who are switching from Windows feel at home.


1.2. The user interface

1.2.1. Is Linux difficult?

Whether Linux is difficult to learn depends on the person you're asking. Experienced UNIX users will say no: Linux is an ideal operating system for power users and programmers, because it has been and still is developed by such people.
Everything a good programmer can wish for is available: compilers, libraries, development and debugging tools. These packages come with every standard Linux distribution. The C-compiler is included for free, all the documentation and manuals are there, and examples are often included to help you get started in no time. It feels like UNIX and switching between UNIX and Linux is a natural thing.
In the early days of Linux, being an expert was kind of required to start using the system. Those who mastered Linux felt better than the rest of the "lusers" who hadn't seen the light yet. It was common practice to tell a beginning user to "RTFM" (read the manuals). While the manuals were on every system, it was difficult to find the documentation, and even if someone did, explanations were in such technical terms that the new user became easily discouraged from learning the system.
The Linux-using community started to realize that if Linux was ever to be an important player on the operating system market, there had to be some serious changes in the accessibility of the system.

1.2.2. Linux for non-experienced users

Companies such as Red Hat, SuSE and Mandrake have sprung up, providing packaged Linux distributions suitable for mass consumption. They integrated a great number of graphical user interfaces (GUIs), developed by the community, in order to ease management of programs and services. As a Linux user today you have all the means of getting to know your system inside out, but it is no longer necessary to have that knowledge in order to make the system comply with your requests.
Nowadays you can log in graphically and start all required applications without even having to type a single character, while you still have the ability to access the core of the system if needed. Because of its structure, Linux allows a user to grow into the system: it equally fits new and experienced users. New users are not forced to do difficult things, while experienced users are not forced to work in the same way they did when they first started learning Linux.
While development in the service area continues, great things are being done for desktop users, generally considered the group least likely to know how a system works. Developers of desktop applications are making incredible efforts to build the most beautiful desktops you've ever seen, or to make your Linux machine look just like your former MS Windows or Macintosh workstation. The latest developments also include 3D acceleration support, support for USB devices, single-click updates of system and packages, and so on. Linux has these, and tries to present all available services in a logical form that ordinary people can understand.
With a tool such as Red Carpet, each item in the channel list (RH 7.2, StarOffice, Opera, Ximian Gnome, Loki games and CodeWeavers) can be updated with one mouse click. Adding or removing software packages or keeping the system up to date is simple with tools like this.

1.1. History

1.1.1. UNIX

In order to understand the popularity of Linux, we need to travel back in time, about 30 years ago...
Imagine computers as big as houses, even stadiums. While the sizes of those computers posed substantial problems, there was one thing that made this even worse: every computer had a different operating system. Software was always customized to serve a specific purpose, and software for one given system didn't run on another system. Being able to work with one system didn't automatically mean that you could work with another. It was difficult, both for the users and the system administrators.
Computers were extremely expensive then, and sacrifices had to be made even after the original purchase just to get the users to understand how they worked. The total cost of IT was enormous.
Technologically the world was not quite that advanced yet, so users had to live with the size for another decade. In 1969, a team of developers in the Bell Labs laboratories started working on a solution for the software problem, to address these compatibility issues. They developed a new operating system, which was
simple and elegant
written in the C programming language instead of in assembly code
able to recycle code.
The Bell Labs developers named their project "UNIX."
The code recycling features were very important. Until then, all commercially available computer systems were written in a code specifically developed for one system. UNIX on the other hand needed only a small piece of that special code, which is now commonly named the kernel. This kernel is the only piece of code that needs to be adapted for every specific system and forms the base of the UNIX system. The operating system and all other functions were built around this kernel and written in a higher programming language, C. This language was especially developed for creating the UNIX system. Using this new technique, it was much easier to develop an operating system that could run on many different types of hardware.
The software vendors were quick to adapt, since they could sell ten times more software almost effortlessly. Weird new situations came into existence: imagine, for instance, computers from different vendors communicating on the same network, or users working on different systems without needing extra education to use another computer. UNIX did a great deal to help users move between different systems.
Throughout the next couple of decades the development of UNIX continued. More became possible, and more hardware and software vendors added support for UNIX to their products.
UNIX was initially found only in very large environments with mainframes and minicomputers (note that a PC is a "micro" computer). You had to work at a university, for the government or for large financial corporations in order to get your hands on a UNIX system.
But smaller computers were being developed, and by the end of the 80's, many people had home computers. By that time, there were several versions of UNIX available for the PC architecture, but none of them were truly free.

1.1.2. Linus and Linux

Linus Torvalds, a young man studying computer science at the University of Helsinki, thought it would be a good idea to have some sort of freely available academic version of UNIX, and promptly started to code.
He started to ask questions, looking for answers and solutions that would help him get UNIX on his PC. Below is one of his first posts in comp.os.minix, dating from 1991:
From: torvalds@klaava.Helsinki.FI (Linus Benedict Torvalds)
Newsgroups: comp.os.minix
Subject: Gcc-1.40 and a posix-question
Message-ID: <1991jul3.100050.9886@klaava.helsinki.fi>
Date: 3 Jul 91 10:00:50 GMT

Hello netlanders,
Due to a project I'm working on (in minix), I'm interested in the posix standard definition. Could somebody please point me to a (preferably) machine-readable format of the latest posix rules? Ftp-sites would be nice.
From the start, it was Linus' goal to have a free system that was completely compliant with the original UNIX. That is why he asked for POSIX standards, POSIX still being the standard for UNIX.
In those days plug-and-play wasn't invented yet, but so many people were interested in having a UNIX system of their own, that this was only a small obstacle. New drivers became available for all kinds of new hardware, at a continuously rising speed. Almost as soon as a new piece of hardware became available, someone bought it and submitted it to the Linux test, as the system was gradually being called, releasing more free code for an ever wider range of hardware. These coders didn't stop at their PC's; every piece of hardware they could find was useful for Linux.
Back then, those people were called "nerds" or "freaks", but it didn't matter to them, as long as the supported hardware list grew longer and longer. Thanks to these people, Linux is now not only ideal to run on new PC's, but is also the system of choice for old and exotic hardware that would be useless if Linux didn't exist.
Two years after Linus' post, there were 12000 Linux users. The project, popular with hobbyists, grew steadily, all the while staying within the bounds of the POSIX standard. All the features of UNIX were added over the next couple of years, resulting in the mature operating system Linux has become today. Linux is a full UNIX clone, fit for use on workstations as well as on middle-range and high-end servers. Today, all the important players on the hard- and software market each have their team of Linux developers; at your local dealer's you can even buy pre-installed Linux systems with official support.

1.1.3. Current application of Linux systems

Today Linux has joined the desktop market. Linux developers concentrated on networking and services in the beginning, and office applications have been the last barrier to be taken down. We don't like to admit that Microsoft is ruling this market, so plenty of alternatives have been started over the last couple of years to make Linux an acceptable choice as a workstation, providing an easy user interface and MS compatible office applications like word processors, spreadsheets, presentations and the like.
On the server side, Linux is well known as a stable and reliable platform, providing database and trading services for organizations like Amazon, the well-known online bookshop, the US Post Office, the German army and others. Internet service providers in particular have grown fond of Linux as a firewall, proxy and web server, and you will find a Linux box within reach of every UNIX system administrator who appreciates a comfortable management station. Clusters of Linux machines are used in the creation of movies such as "Titanic", "Shrek" and others. In post offices, they are the nerve centers that route mail, and at large search engines clusters are used to perform Internet searches. These are only a few of the thousands of heavy-duty jobs that Linux performs day to day across the world.
It is also worth noting that modern Linux not only runs on workstations and mid- and high-end servers, but also on "gadgets" like PDAs, mobile phones, a shipload of embedded applications and even experimental wristwatches. This makes Linux the only operating system in the world covering such a wide range of hardware.



How to install roundcube on cPanel Server?

To install RoundCube on a cPanel (Linux) server, you need to know your MySQL root password; in the commands below, replace DATABASEPASSWORD with it.
If you have installed RoundCube before, make sure you remove all traces of the old installation first with the following commands:
cd /usr/local/cpanel/base
rm -rf roundcube*
mysql -p -e 'drop database roundcube;'
chattr -i /usr/local/cpanel/base/frontend/x/webmaillogin.html
chattr -i /usr/local/cpanel/base/webmaillogin.cgi
/scripts/upcp
You will have to specify your root password when prompted.
Let us begin with the installation.
A) Download RoundCube from the source given below and apply the proper permissions to its directories.
cd /usr/local/cpanel/base
wget -O roundcube.tar.gz http://heanet.dl.sourceforge.net/sourceforge/roundcubemail/roundcubemail-0.1-rc1.tar.gz
tar -zxvf roundcube.tar.gz
rm -rf roundcube.tar.gz
mv -f roundcubemail-0.1-rc1 roundcube
cd roundcube
chmod -R 777 temp
chmod -R 777 logs
B) Create the database and load the initial SQL file. The following commands will do this for you; replace DATABASEPASSWORD with your MySQL root password.
mysql -e "CREATE DATABASE roundcube;" -pDATABASEPASSWORD
mysql -e "use roundcube; source SQL/mysql.initial.sql;" -pDATABASEPASSWORD
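Optionally, before moving on, you can confirm that the schema was actually loaded (this is just a quick sanity check, not part of the original procedure):

mysql -pDATABASEPASSWORD -e "SHOW TABLES;" roundcube

This should list the tables created by mysql.initial.sql.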
C) Set the configuration as given below.
cd config
mv db.inc.php.dist db.inc.php
mv main.inc.php.dist main.inc.php
Now open db.inc.php
nano db.inc.php
Find
$rcmail_config['db_dsnw'] = 'mysql://roundcube:pass@localhost/roundcubemail';
Replace with
$rcmail_config['db_dsnw'] = 'mysql://root:DATABASEPASSWORD@localhost/roundcube';
Now Open main.inc.php
nano main.inc.php
Find
$rcmail_config['default_host'] = '';
Replace with
$rcmail_config['default_host'] = 'localhost';
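A quick sanity check that the edited files are still syntactically valid PHP (php -l only lints the syntax; it does not test the database or mail connection):

php -l db.inc.php
php -l main.inc.php

Both commands should report "No syntax errors detected".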
D) Configure cPanel to show RoundCube in the theme. Please note this is for the X (default) theme only! If you use another theme, skip the next part and see below.
cd /usr/local/cpanel/base/roundcube/skins/default/images/
cp --reply=yes roundcube_logo.png /usr/local/cpanel/base/frontend/x/images/roundcube_logo.png
cp --reply=yes roundcube_logo.png /usr/local/cpanel/base/webmail/x/images/roundcube_logo.png
cd /usr/local/cpanel/base
wget http://www.hostgeekz.com/files/hostgeekz/HGpatch-roundcube-0.1-rc1
patch -p0 < HGpatch-roundcube-0.1-rc1

***UPDATE***
Remember to chattr +i the files or add the patch to your /scripts/upcp.
chattr +i /usr/local/cpanel/base/frontend/x/webmaillogin.html
chattr +i /usr/local/cpanel/base/webmaillogin.cgi
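
To confirm the immutable flag is actually set (the 'i' attribute should appear in the output), you can check with lsattr:

lsattr /usr/local/cpanel/base/frontend/x/webmaillogin.html /usr/local/cpanel/base/webmaillogin.cgi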



Apache Installation on Linux

Apache, also called Apache httpd, is the most popular web server and is used by many different types of websites.
Below are the installation steps for Apache. Any user who is familiar with changing directories, using tar and gunzip, and compiling software, and who has the root account on the server, can easily install Apache.
Installing Apache version 1.3.37
1) SSH to your server as root and download the Apache 1.3.37 source from the Apache httpd server website, http://httpd.apache.org/ (the archive URL below is one possible location for the 1.3.37 tarball; adjust it if the download page points elsewhere).
# cd /usr/local/src
# wget http://archive.apache.org/dist/httpd/apache_1.3.37.tar.gz
2) Extract the tar file.
# tar -zxvf apache_1.3.37.tar.gz
# cd apache_1.3.37
3) Now configure the source tree. We are installing Apache at /usr/local/apache. The following will create a makefile.
# ./configure --prefix=/usr/local/apache \
--enable-so \
--enable-cgi \
--enable-info \
--enable-rewrite \
--enable-speling \
--enable-usertrack \
--enable-deflate \
--enable-ssl \
--enable-mime-magic
You may enable only '--enable-so' if you wish; for more information about the other options, run ./configure --help. (Note that options in this form belong to the Apache 2.x configure script; the Apache 1.3 equivalent is --enable-module=NAME, and mod_deflate and built-in SSL support are only available in the 2.x series.)
Setting up the Apache server
With the above installation, the Apache configuration file is created at /usr/local/apache/conf/httpd.conf.
1) If you want to run Apache on a port other than the default (80), change the number on line 280. Ports below 1024 require Apache to be started as root. Port 80 is probably the easiest to use, since any other port has to be specified explicitly in the web browser, e.g. http://localhost:81.
Port 80
2) You may want to change the server admin email address on line 313:
ServerAdmin admin@example.com
3) Specify your machine name on line 331; you may just have to remove the # comment marker. If you configure virtual hosts as outlined below, Apache will treat the virtual host you name here as the default site.
ServerName www.example.com
4) You should set the document root on line 338:
DocumentRoot /usr/local/apache/htdocs
5) On line 363, the <Directory> block for the default htdocs directory should be changed to whatever you set DocumentRoot to.
6) The default file to serve in directories is index.html. You can change this or add new file names (in order of importance) on line 414.
DirectoryIndex index.html index.htm index.php
7) If you don't get a large number of hits and you want to know where your visitors are from, turn hostname lookups on at line 511. Turning this on places extra load on your server, as it has to look up the hostname corresponding to the IP address of every visitor.
HostnameLookups On
8) The Apache error log location is set on line 520:
ErrorLog /usr/local/apache/logs/error_log
Setting Up Virtual Hosts:
1) Virtual hosts in Apache enable a single Apache server to serve different web pages for different domains. Through virtual hosts we can configure how Apache should handle requests for each domain.
When a site or domain is opened in a web browser, the browser sends the hostname of the server it is connecting to along with the request. All HTTP requests that arrive at the server (on the ports it was told to listen on) are caught by Apache. Apache checks the hostname included in the request and uses it to determine which virtual host configuration to apply.
2) When a request is received by Apache, it gets the following details:
Hostname: The domain name (eg. hostingcomments.com)
IP address: (eg. 10.10.10.1)
Port: (eg. 80)
During Apache configuration, we must list in the configuration file each IP address and port combination for which we will be specifying virtual host domains. So we add a NameVirtualHost entry in httpd.conf:
NameVirtualHost 10.10.10.1:80
Please make sure the IP address that you use is configured on your machine.
3) Each virtual host will have its own directory for the web pages to be stored. This can be anywhere that the Apache web server has permission to read. For example, on a cPanel server the web pages are located at /home/username/public_html.
Now, if we set up the domain hostingcomments.com on the server, its VirtualHost entry will be:
NameVirtualHost 10.10.10.1:80
<VirtualHost 10.10.10.1:80>
ServerAlias www.hostingcomments.com
ServerAdmin webmaster@hostingcomments.com
DocumentRoot /home/hosting/public_html/
ServerName hostingcomments.com
BytesLog domlogs/hostingcomments.com-bytes_log
CustomLog /usr/local/apache/domlogs/hostingcomments.com combined
</VirtualHost>
4) Now build Apache from the makefile generated by the configure step above.
# make
5) If make completes without errors, install Apache.
# make install
Running Apache:
1) apachectl is the easiest way to start and stop your server manually.
# cd /usr/local/apache/bin
# ./apachectl start
2) You can also copy the control script /usr/local/apache/bin/apachectl to /etc/init.d/httpd, from where you can stop and start Apache.
# /etc/init.d/httpd stop
# /etc/init.d/httpd start
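
Whenever you change httpd.conf or the virtual host entries later, it is worth letting Apache check the configuration syntax before restarting; apachectl ships with a configtest option for exactly this:

# /usr/local/apache/bin/apachectl configtest

If it reports "Syntax OK" you can restart safely; otherwise it prints the offending line.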


LINUX FLAVOURS

Different companies produce different distributions of Linux; they differ in add-on software, GUI, basic commands, parameters, price and other details. These companies upgrade their versions on similar time frames. Some of the Linux flavours are described below.

* Red Hat:- The Red Hat distributions for Intel, Alpha and Sparc are built from the exact same source packages. This is to ensure maximum portability between platforms regardless of the underlying hardware architecture.
It’s also pegged as the top distro in terms of the development, deployment and management of Linux for an internet infrastructure.
Red Hat is also famous for a very easy installation system known as the Red Hat Package Manager (RPM), which effectively allows packages to be downloaded and installed with a single command.

* DEBIAN:- The Debian Project describes itself as “an association of individuals who have made common cause to create a free operating system”. But it has a reputation for being the choice of 'elite' Linux users, (in)famous for its uber-techie 'holier than thou' user base.
Although, like many Linux variants, Debian is updated and maintained through the work of many users who volunteer their efforts, extensive pre-release testing is done to ensure the highest degree of reliability possible, and a publicly accessible bug tracking system provides an easy way to monitor user feedback.

* SUSE:- SuSE is the leading Linux distro in Europe and the biggest competitor to Red Hat. Known for its easy to use interface, SuSE is also renowned for good customer service, making it a strong player in the enterprise space.
Like Red Hat, SuSE is one of the oldest flavours of Linux. SuSE is also involved in the UnitedLinux project.

* MANDRAKE:- MandrakeSoft seized the opportunity to integrate a user-friendly graphical desktop environment as well as to contribute its own graphical configuration utilities.
As a result Mandrake quickly became famous for setting the standard in ease-of-use and functionality and proved that Linux as a server or workstation has no reason to be jealous of any other more established operating systems.

* GENTOO:- Gentoo Linux is a versatile and fast distribution geared towards developers and network professionals. A key benefit of Gentoo is its advanced package management system, called Portage.
This is a true ports system in the tradition of BSD ports, but is Python-based and sports a number of advanced features including dependencies, fine-grained package management, ‘fake’ installs, sand-boxing, safe un-merging, system profiles, virtual packages, config file management and more. It’s smooth and sleek, but definitely a Linux for power users.

* CALDERA:- Caldera's OpenLinux product line is a multi-tasking, multi-user Linux-based operating system surrounded with utilities, graphical interfaces, installation procedures, third-party applications, etc.
But, like Turbolinux, Caldera (which acquired SCO's Unix business and was later renamed The SCO Group) now focuses on the UnitedLinux project.

* UBUNTU:- Ubuntu is a complete Linux-based operating system, freely available with both community and professional support. It is developed by a large community. Ubuntu is suitable for both desktop and server use. The current Ubuntu release supports PC (Intel x86), 64-bit PC (AMD64), Sun UltraSPARC and T1 (Sun Fire T1000 and T2000), PowerPC (Apple iBook, Powerbook, G4 and G5) and OpenPower (Power5) architectures.

Linux Advantages

Low cost: You don't need to spend time and money to obtain licenses, since Linux and much of its software come with the GNU General Public License. You can start to work immediately without worrying that your software may stop working anytime because the free trial version expires.

Stability: Linux doesn’t need to be rebooted periodically to maintain performance levels. It doesn’t freeze up or slow down over time due to memory leaks and such. Continuous up-times of hundreds of days (up to a year or more) are not uncommon.

Performance: Linux provides persistent high performance on workstations and on networks. It can handle unusually large numbers of users simultaneously, and can make old computers sufficiently responsive to be useful again.

Network friendliness: Linux was developed by a group of programmers over the Internet and therefore has strong support for network functionality; client and server systems can be easily set up on any computer running Linux. It can perform tasks such as network backups faster and more reliably than alternative systems.

Flexibility: Linux can be used for high performance server applications, desktop applications, and embedded systems. You can save disk space by only installing the components needed for a particular use. You can restrict the use of specific computers by installing for example only selected office applications instead of the whole suite.

Compatibility: It runs all common Unix software packages and can process all common file formats.

Choice: The large number of Linux distributions gives you a choice. Each distribution is developed and supported by a different organization. You can pick the one you like best; the core functionalities are the same; most software runs on most distributions.

Fast and easy installation: Most Linux distributions come with user-friendly installation and setup programs.
Full use of hard disk: Linux continues to work well even when the hard disk is almost full.

Multitasking: Linux is designed to do many things at the same time; e.g., a large printing job in the background won’t slow down your other work.
Security: Linux is one of the most secure operating systems. “Walls” and flexible file access permission systems prevent access by unwanted visitors or viruses.

Open source: If you develop software that requires knowledge or modification of the operating system code, Linux’s source code is at your fingertips.