The World Wide Web conjures up images of a giant spider web where everything is connected to everything else in a random pattern, and you can go from one edge of the web to another by just following the right links. Theoretically, that's what makes the web different from a typical index system: you can follow hyperlinks from one page to another. In the "small world" theory of the web, every web page is thought to be separated from any other web page by an average of about 19 clicks. In 1968, sociologist Stanley Milgram popularized small-world theory for social networks by noting that every human was separated from any other human by only six degrees of separation. On the web, the small-world theory was supported by early research on a small sampling of web sites. But research conducted jointly by scientists at IBM, Compaq, and AltaVista found something quite different. These scientists used a web crawler to identify 200 million web pages and follow 1.5 billion links on these pages.
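
The "clicks" in the small-world claim are simply shortest-path lengths in the directed graph of hyperlinks. As a point of reference, the sketch below estimates an average click distance by sampling page pairs on a small synthetic link graph; the toy graph, the sample sizes, and the networkx dependency are assumptions for illustration only, so the number it prints will be far smaller than 19.

    import random
    import networkx as nx  # assumed to be installed (pip install networkx)

    # Small synthetic stand-in for a crawled link graph; the real studies
    # worked on hundreds of millions of pages and billions of links.
    web = nx.DiGraph(nx.scale_free_graph(3000, seed=0))

    rng = random.Random(0)
    pages = list(web.nodes)
    clicks = []
    for _ in range(1000):
        src, dst = rng.sample(pages, 2)
        if nx.has_path(web, src, dst):  # skip pairs with no connecting path at all
            clicks.append(nx.shortest_path_length(web, src, dst))

    print("average clicks between connected pages:", sum(clicks) / len(clicks))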

The researchers discovered that the web was not like a spider web at all, but rather like a bow tie. The bow-tie web had a "strongly connected component" (SCC) composed of about 56 million web pages. On the right side of the bow tie was a set of 44 million OUT pages that you could reach from the center, but from which you could not return to the center. OUT pages tended to be corporate intranet and other web site pages that are designed to trap you at the site when you land. On the left side of the bow tie was a set of 44 million IN pages from which you could reach the center, but that you could not travel to from the center. These tended to be recently created pages that had not yet been linked to by many center pages. In addition, 43 million pages were classified as "tendrils," pages that did not link to the center and could not be reached from the center. However, tendril pages were sometimes linked to IN and/or OUT pages. Occasionally, tendrils linked to one another without passing through the center (these are called "tubes"). Finally, there were 16 million pages totally disconnected from everything.
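
For readers who want to see how such a bow-tie decomposition can be computed, the sketch below classifies a toy directed link graph into SCC, IN, OUT, and everything else (tendrils, tubes, and disconnected pages) using plain reachability tests. The page names and the brute-force SCC step are illustrative assumptions, not the method used in the IBM/Compaq/AltaVista study.

    from collections import defaultdict

    # Hypothetical page -> outgoing links (illustration only, not crawl data).
    links = {
        "A": ["B"], "B": ["C"], "C": ["A", "D"],  # A, B, C link in a cycle: the core (SCC)
        "D": [],           # OUT: reachable from the core, with no path back
        "E": ["A"],        # IN: links into the core, but the core never links back
        "F": ["D"],        # tendril: hangs off the bow tie without touching the core
        "G": [],           # completely disconnected
    }

    def reachable(start, graph):
        """All pages reachable from `start` by following links (depth-first)."""
        seen, stack = set(), [start]
        while stack:
            node = stack.pop()
            if node not in seen:
                seen.add(node)
                stack.extend(graph.get(node, []))
        return seen

    pages = set(links) | {t for outs in links.values() for t in outs}
    reach = {p: reachable(p, links) for p in pages}

    # Largest strongly connected component: the biggest set of mutually reachable pages.
    # (A production crawler would use Tarjan's or Kosaraju's algorithm instead.)
    sccs = defaultdict(set)
    for p in pages:
        sccs[frozenset(q for q in pages if q in reach[p] and p in reach[q])].add(p)
    core = max(sccs.values(), key=len)

    core_reach = set().union(*(reach[p] for p in core))
    out_pages = core_reach - core                             # reachable from the core
    in_pages = {p for p in pages - core if reach[p] & core}   # can reach the core
    other = pages - core - out_pages - in_pages               # tendrils, tubes, disconnected

    print("SCC:  ", sorted(core))       # ['A', 'B', 'C']
    print("OUT:  ", sorted(out_pages))  # ['D']
    print("IN:   ", sorted(in_pages))   # ['E']
    print("other:", sorted(other))      # ['F', 'G']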

Further evidence for the non-random and structured nature of the web is provided in research performed by Albert-Laszlo Barabasi at the University of Notre Dame. Barabasi's team found that far from being a random, exponentially exploding network of 50 billion web pages, activity on the web was actually highly concentrated in "very-connected super nodes" that provided the connectivity to less well-connected nodes. Barabasi dubbed this type of network a "scale-free" network and found parallels in the growth of cancers, the transmission of disease, and computer viruses. As it turns out, scale-free networks are highly vulnerable to destruction: destroy their super nodes and transmission of messages breaks down rapidly. On the upside, if you are a marketer trying to "spread the message" about your products, place your products on one of the super nodes and watch the news spread. Or build super nodes and attract a huge audience.
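
To make the scale-free idea concrete, the sketch below grows a small network by preferential attachment (new pages favor linking to already well-linked pages, the growth rule Barabasi described) and compares what happens to the largest connected cluster when the 20 best-connected "super nodes" are removed versus 20 random nodes. The network size, the node counts, and the networkx calls are assumptions for illustration, not Barabasi's actual analysis.

    import random
    import networkx as nx  # assumed to be installed (pip install networkx)

    # Grow a 2,000-node network by preferential attachment (Barabasi-Albert model):
    # each new node links to 2 existing nodes, favoring already well-linked ones.
    G = nx.barabasi_albert_graph(n=2000, m=2, seed=42)

    def largest_cluster_fraction(graph):
        """Fraction of remaining nodes still in the biggest connected cluster."""
        biggest = max(nx.connected_components(graph), key=len)
        return len(biggest) / graph.number_of_nodes()

    hubs = [node for node, deg in sorted(G.degree, key=lambda nd: nd[1], reverse=True)[:20]]
    random_nodes = random.Random(42).sample(list(G.nodes), 20)

    attacked, degraded = G.copy(), G.copy()
    attacked.remove_nodes_from(hubs)          # knock out the super nodes
    degraded.remove_nodes_from(random_nodes)  # knock out ordinary nodes

    print("intact network:   ", largest_cluster_fraction(G))
    print("super nodes gone: ", largest_cluster_fraction(attacked))  # drops more
    print("random nodes gone:", largest_cluster_fraction(degraded))  # stays close to 1.0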

Thus the picture of the web that emerges from this research is quite different from earlier reports. The notion that most pairs of web pages are separated by a handful of links, almost always under 20, and that the number of connections would grow exponentially with the size of the web, is not supported. In fact, there is a 75% chance that there is no path from one randomly chosen page to another. With this knowledge, it now becomes clear why the most advanced web search engines only index a small percentage of all web pages, and only about 2% of the overall population of internet hosts (about 400 million). Search engines cannot find most web sites because their pages are not well connected or linked to the central core of the web. Another important finding is the identification of a "deep web" composed of over 900 billion web pages that are not easily accessible to the web crawlers that most search engine companies use. Instead, these pages are either proprietary (not available to crawlers and non-subscribers), like the pages of the Wall Street Journal, or are not easily available from home pages. In the last few years, newer search engines (such as the medical search engine MammaHealth) and older ones such as Yahoo have been revised to search the deep web. Because e-commerce revenues in part depend on customers being able to find a web site using search engines, web site managers need to take appropriate measures to ensure their web pages are part of the connected central core, or "super nodes," of the web. One way to do this is to make sure the site has as many links as possible to and from other relevant sites, especially to sites within the SCC.
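
As a rough sanity check on the 75% no-path figure quoted above, the bow-tie component sizes reported earlier are enough for a back-of-the-envelope estimate: ignoring tendrils and tubes, a directed path from one random page to another roughly requires the starting page to sit in IN or the core, and the destination to sit in the core or OUT. The sketch below is only that approximation, not the study's actual calculation.

    # Bow-tie component sizes from the study above, in millions of pages.
    IN, SCC, OUT, TENDRILS, DISCONNECTED = 44, 56, 44, 43, 16
    total = IN + SCC + OUT + TENDRILS + DISCONNECTED           # about 203 million pages

    # Rough approximation: a path exists when the source can reach the core
    # (IN or SCC) and the destination is reachable from the core (SCC or OUT).
    p_path = ((IN + SCC) / total) * ((SCC + OUT) / total)
    print(f"estimated chance of no path: {1 - p_path:.0%}")    # prints about 76%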
