Nanodatacenters objective

Historically, content distribution on the Internet has relied on a client-server model. This model has shaped all legacy Internet applications, such as the web, electronic mail, and FTP. Over the past ten years, content distribution solutions have evolved from the classical client-server model, through distributed caching, to Content Distribution Networks (CDNs) and, more recently, peer-to-peer (P2P) networks. NaDa (Nanodatacenters) is the next step in this data hosting and content distribution paradigm. By providing a distributed hosting infrastructure at the network edge, NaDa can enable the next generation of interactive services and applications to flourish, complementing existing data centres and reaching a massive number of users far more efficiently.



Increased computational power, combined with advances in data storage and global networking, has made Internet services a critical resource in our everyday life. Data centres (buildings that host large numbers of networked computer servers and their power supplies) are often critical enablers of such services. Data centres are known to be a major source of cost and complexity for operators, and their centralised nature makes them inherently hard to scale. As a result, router companies, server manufacturers, and hosting facilities are racing to produce more efficient hardware and software for data centres, and to improve the efficiency with which these components are operated. For instance, operators may dynamically shut down some processes, or even entire machines, depending on the current load. They may also redirect surplus load to otherwise idle machines in the same data centre. While this effort improves efficiency, it is bound to produce rather short-term remedies. Indeed, it is the entire paradigm of monolithic data centres that is being challenged, not the specifics of their numerous possible realizations.
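The load-management tactic mentioned above (shutting down lightly loaded machines and concentrating work on the rest) can be sketched in a few lines. This is a minimal illustration under stated assumptions, not NaDa's or any operator's actual policy; the machine names, load figures, and the `capacity` threshold are all made up for the example.

```python
import math


def consolidate(loads, capacity=1.0):
    """Pack the current total load onto as few machines as possible.

    loads    -- dict mapping machine id -> current load fraction (assumed data)
    capacity -- maximum load fraction one machine can absorb (assumed threshold)

    Returns (kept, shut_down): machines to keep running and machines
    that can be powered down until demand rises again.
    """
    total = sum(loads.values())
    # Fewest machines needed if each kept machine is filled up to `capacity`.
    needed = max(1, math.ceil(total / capacity))
    # Keep the machines that are already most loaded, so the least work
    # has to be migrated off the machines being shut down.
    ranked = sorted(loads, key=loads.get, reverse=True)
    return ranked[:needed], ranked[needed:]


if __name__ == "__main__":
    loads = {"m1": 0.2, "m2": 0.1, "m3": 0.7, "m4": 0.05}
    kept, off = consolidate(loads)
    print("keep:", kept, "shut down:", off)
```

With a total load of 1.05, two machines suffice: the policy keeps the two busiest (`m3`, `m1`) and powers down `m2` and `m4`. Real schedulers must also account for migration cost and headroom for load spikes, which this sketch ignores.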

Data Center Challenges
A data centre contains primarily electronic equipment used for data processing (servers), data storage (storage equipment), and communications (network equipment). Collectively, this equipment processes, stores, and transmits digital information and is known as “Information Technology” (IT) equipment. Data centres also usually contain specialised power conversion and backup equipment to maintain reliable, high-quality power, as well as environmental control equipment to maintain the proper temperature and humidity for the IT equipment.
P2P challenges
In the last decade, the low cost of computer memory, together with the increasing performance of processors and most importantly the commoditization of broadband access (whether cable or DSL), has allowed the emergence of a new edge content distribution paradigm: peer-to-peer (P2P). With P2P, content is spread onto consumers’ end devices (currently PCs) and each application builds an overlay connecting these devices.
In general, current P2P applications suffer from a number of limitations:
  • Lack of service guarantees, due to uncontrolled interference between different applications, which often results in poor quality for P2P-based applications such as live video streaming or even telephony.
  • Inefficient use of network resources and consequently poor performance: the overlay built by the peers is not optimised to take the underlay (i.e. the actual physical network) into account.
  • A design driven by selfish user behaviour and free-riding prevention mechanisms, rather than by well-thought-out resource scheduling that maximizes the performance of the overall system.
  • Absence of security and control, making it impossible to guarantee the integrity and security of content, and limiting the quality and diversity of the available content.
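The overlay/underlay mismatch in the list above can be made concrete with a small sketch: an underlay-oblivious overlay picks neighbours at random, while an underlay-aware one prefers peers that are close in the physical network (here approximated by round-trip time). The peer names and RTT values are invented for illustration; this is not NaDa's peer-selection algorithm.

```python
import random

# Assumed measurements: RTT in milliseconds from the local peer to candidates.
RTT_MS = {
    "peer-A": 12, "peer-B": 180, "peer-C": 25,
    "peer-D": 210, "peer-E": 9,  "peer-F": 95,
}


def random_neighbours(rtt, k):
    """Underlay-oblivious: choose k overlay neighbours uniformly at random."""
    return random.sample(list(rtt), k)


def latency_aware_neighbours(rtt, k):
    """Underlay-aware: choose the k candidates with the lowest RTT."""
    return sorted(rtt, key=rtt.get)[:k]


def mean_rtt(rtt, neighbours):
    """Average RTT to the chosen neighbours, a crude proxy for overlay cost."""
    return sum(rtt[n] for n in neighbours) / len(neighbours)


if __name__ == "__main__":
    aware = latency_aware_neighbours(RTT_MS, 3)
    oblivious = random_neighbours(RTT_MS, 3)
    print("aware:", aware, "mean RTT:", mean_rtt(RTT_MS, aware))
    print("oblivious:", oblivious, "mean RTT:", mean_rtt(RTT_MS, oblivious))
```

With these numbers the latency-aware choice (`peer-E`, `peer-A`, `peer-C`) averages about 15 ms, whereas a random choice frequently includes distant peers at 100 ms or more, so the same content transfers consume far more backbone capacity.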