Why major Internet providers have abandoned high-speed tariffs. Testing an ultra-high-speed network and looking for its weak points

Over the past year, major providers suddenly began rolling out tariffs at breathtaking speeds. Three hundred megabits per second! Five hundred! A gigabit! And then, little by little, they abandoned them. Now my favorite Online tops out at 100 Mbps, Beeline offers the same (with one caveat, covered a little below), and the still-alive Akado somehow lists 150 Mbps, though the latter can write whatever it likes; everyone finds it equally hard to believe.

Why do ISPs cut speeds? Explanations come to mind, from the high cost of investing in network infrastructure to the consequences of sanctions. But in reality everything is much, much simpler. It turned out to be just like a well-known joke I like to tell.

Leonid Ilyich Brezhnev is asked:

- Why is there no meat in the stores?

- We are moving toward communism by leaps and bounds; the cattle cannot keep up with us.

The cattle here stand for the level of technology we are all used to using.

Bringing a gigabit into an apartment is not cheap, but it is entirely feasible, and that speed really can be delivered. That is when the trouble starts.

To push even 500 Mbps you need a top-end router, a year old at most, better yet brand new; otherwise it will overheat and choke. Very few such routers are in people's hands. We run junk that takes your breath away. Does it work? Well, let it work, why replace it. Even gigabit ports are still far from universal.

To move data at such speeds over the air, you need 802.11ac support both in the router and, just as importantly, in the end device. Meanwhile, official deliveries of laptops with 802.11ac to Russia have so far been banned for lack of certification by the Ministry of Communications. So it is 802.11n only: a peak of 450 Mbps, in practice about 300. And even on the few devices imported "gray", the peak link rate rarely exceeded 866 Mbps.

And how did everything happen in practice?

A person with a router bought five years ago and a zoo of equipment that does not support 802.11ac connects a gigabit tariff and indignantly notices that the speed is not as advertised! It will not go above 300 megabits, no matter what! Swindlers, incompetents! Angry forum posts begin, along with letters and calls to technical support. The cause cannot be established without a technician's visit, so technicians get dispatched. They tell users: buy new routers, update your equipment. This triggers a new wave of indignation: you crooks, not only do you take money every month, now you shake us down for hardware too! I'll sue!

After about three months of this bedlam, providers decided their nerves were worth more, and quietly removed the high-speed tariffs from their sites. Most users, frankly, have no use even for 100 megabits, or even 50, so hardly anyone noticed the tariffs disappear.

People write to me that small providers still try to sell 400 and even 500 Mbps, but they must have either savvy users or support staff with titanium nerves. MGTS keeps a 500-megabit tariff in its lineup, but it has nowhere to retreat: it promoted "a gigabit to every home" with such fanfare that dropping it is impossible. Beeline sells 365 megabits on its most expensive tariff, but only bundled with its own router (a very good one, by the way: the first 802.11ac implementation on a MediaTek chipset). In any case, I'm afraid we won't see gigabit return to the mass segment for another couple of years.

I recently visited an internet forum where people were discussing their 1-gigabit fiber connections. "Lucky them!" I thought. But is it really about luck? If you notice that instead of 1 Gbps you are getting about 80 Mbps or even less, the problem may be the wrong Ethernet cable.

In this article, we will tell you how to choose the right Ethernet cable for maximum Internet connection speed.

WiFi vs Ethernet

Let's establish right away that an Ethernet cable provides a faster Internet connection than Wi-Fi. Yes, a wireless network is very convenient, but if you want the best possible speed, use an Ethernet cable.

Ethernet to the rescue!

Naturally, if you have a wired network and very fast broadband, you do not want a 100 Mbps (Fast Ethernet) link between your computer and your ISP's modem. That would be silly! You need gigabit end to end.

All you need to do is connect all your home devices with inexpensive Cat 6 Ethernet cables, and use cheap gigabit switches as "nodes" to connect your devices.

My home network looks like this:

Pretty simple, right?

The orange line is a Cat 6 Ethernet cable. You simply connect computers, routers, laptops with these cables and everything “just works”.

However, note that some laptops come with cheap built-in Fast Ethernet adapters that top out at 100 Mbps. If that is the case with your computer, buy a gigabit USB-to-Ethernet adapter.

But which switches and Ethernet cables should you buy?

This is also a pretty easy question.

For Ethernet switches, you need a quality "gigabit Ethernet switch". We recommend purchasing the 8-port D-Link Gigabit DGS-108, which is great for home use.

This switch is also convenient as a diagnostic tool: when you plug in an Ethernet cable and the port LED lights green, the link is running at 1 gigabit; if the indicator is orange, the speed is only 10 or 100 Mbps. This also lets you determine which Ethernet adapter your computer has, as discussed above.
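If you prefer not to squint at LEDs, the negotiated link speed can be read programmatically. This is a minimal sketch assuming a Linux machine, where the kernel exposes the speed in sysfs; the interface name "eth0" is an assumption, check yours with `ip link`.

```python
def link_speed_mbps(iface: str) -> int:
    """Read the negotiated link speed in Mbps from sysfs (Linux only)."""
    with open(f"/sys/class/net/{iface}/speed") as f:
        return int(f.read().strip())

def classify(speed_mbps: int) -> str:
    """Mirror the switch's LED colors: green for gigabit, orange for less."""
    return "gigabit (green)" if speed_mbps >= 1000 else "10/100 (orange)"

# Example usage: print(classify(link_speed_mbps("eth0")))
```

If this reports 100 on a machine wired with Cat 6 to a gigabit switch, suspect the adapter or the cable, not the provider.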

As for Ethernet cables, you just need to make sure you are using Cat 6 (category 6). Ethernet cables usually have a category printed on them, such as:

Please note that there are other types of Ethernet cable, such as Cat 5, Cat 5e, Cat 6a, etc. Any cable labeled Cat 6 is a great option for our situation (regardless of the letter at the end, if any). Cat 5 cables should not be purchased, because they are rated for speeds below 1 Gbps.

By the way, the connectors on Ethernet cables matter little for signal quality and speed; the four twisted pairs of wires inside the cable matter far more. The higher the category, the faster the cable can carry data. That is why you should use Cat 6 or higher: Cat 6 is made for Gigabit Ethernet!
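The category-to-speed relationship fits in a few lines. The figures below are the commonly quoted maximums for full 100 m runs per the TIA cabling specs, not measurements from this article:

```python
# (bandwidth in MHz, rated speed in Mbps) at the full 100 m run length.
CABLE_CATEGORIES = {
    "Cat 5":  (100, 100),
    "Cat 5e": (100, 1000),
    "Cat 6":  (250, 1000),   # can do 10 Gbps on short (~55 m) runs
    "Cat 6a": (500, 10000),
}

def supports_gigabit(category: str) -> bool:
    """True if the category is rated for 1 Gbps at full length."""
    return CABLE_CATEGORIES[category][1] >= 1000
```

Note that plain Cat 5e is already rated for gigabit; the advice to buy Cat 6 is about headroom and build quality, while plain Cat 5 genuinely is not enough.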

Also, you don't have to worry about shielding if you're buying a pre-made cable. Just make sure it's Cat 6 and full speed ahead!

We have prepared some tips and notes on using Ethernet cables throughout the home:

  • Do not unwind the network cable;
  • Do not pinch the cable in the door;
  • Do not bend the cable at right angles; round it off at the corners.

Cat 6 cable is slightly sturdier than the others because it has a plastic core that keeps the twisted pairs in place. Still, don't abuse that strength: the more you crush the cable, the more the wires inside shift, and the lower the data rate becomes.

With a few simple tips, you can make your home network as fast as possible. A 1 Gbps internet connection is no problem, of course, if your ISP offers such fast broadband.


Gigabit internet at home - and what to do with it? Testing a high-speed network and looking for its weak points

The Internet is expensive, the speed is low - such complaints will probably never go away. Weighing both claims, one can object: prices in Belarus for network access, all things considered, are quite reasonable. But what about speed?.. For one person a megabit is enough; for another, even 100 will not suffice. It all depends on one's needs, and modern content can hardly be called "light", while its providers do not worry much about the "width" of your channel. As an experiment, we asked the Internet operator Atlant Telecom to provide home gigabit Internet, in order to understand whether a Belarusian needs the notorious 1 Gbps.

What connection speed is considered comfortable? In some countries the "social" minimum is 5-megabit Internet. South Korea has long held the lead with 28.6 Mbps, while the global average is 7.2 Mbps. For comparison, in Belarus, according to an Akamai report, the average speed is about 9.7 Mbps, placing the country in the fifth ten of the world ranking, which is a decent showing.

But what is this mythical gigabit Internet? Mythical, that is, for the ordinary user who does not know what a data center, the Internet of things, or big data is - in other words, for 95% of Belarusians. In principle it could be available to Belarusians even today, but telecom operators for some reason do not offer such tariffs, or the offer is limited, although a few years ago at least one such option existed.

Connection

Before the switchover I had long used a tariff with a 50-megabit connection (100 Mbps in peering). The pros and cons of such a connection are familiar to many: torrents can swallow the entire channel, yet IPTV and games do not suffer much at the same time; the speed is enough for everything to run simultaneously.

The move to the (even) faster connection amounted to laying a new cable straight from the operator's equipment, replacing the twisted pair inside the apartment, and replacing the router - and the speed rose 20-fold. Then a few surprises awaited us.

The first came from the popular Speedtest. When I tried to measure the connection speed, the service sent me to a "ban" (due to the peculiarities of Speedtest's algorithms). Solving the problem took some time: the provider reconfigured its hardware.

Then, once some incredible values appeared on the Speedtest scoreboard, the second surprise arrived: it turns out that not every server in Belarus can "lift" that same gigabit. Well, let's try foreign ones...

The server refused to measure the speed: it either went down or issued a "ban"

Only some of the measurement results are shown; the Yandex service refused to accelerate

Some remote hosts were sensitive to the load and blocked access, but even so the speed ranged from 450-550 Mbps to the USA (Cupertino) up to 930 Mbps toward Russia, with Europe - Germany, France, Poland - roughly in the middle.

With the synthetic tests done, it was time to try something closer to reality. We look for files on a p2p network, then launch Steam. In the first case, the most popular file brought us close to 41 MB per second. That is not the limit, but the result is telling: it falls short of the maximum.

In Valve's service, a game of about 30 GB was selected. Click "Install", and the speed will not rise above 330 Mbps (41 MB per second). What is that rustling under the table? There is the bottleneck: the hard drive, which has exhausted its capabilities. Select an SSD in the settings, and the speed climbs to 64 megabytes per second (about 512 Mbps).
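The MB/s figures that download clients show and the Mbps figures tariffs are quoted in differ by a factor of eight, which is worth double-checking before blaming the provider:

```python
def mbytes_to_mbits(mb_per_s: float) -> float:
    """Convert MB/s (what Steam or a torrent client shows) to Mbps
    (what the tariff promises): 1 byte = 8 bits."""
    return mb_per_s * 8

# The figures from the test above: 41 MB/s is 328 Mbps (the article
# rounds to 330, HDD-limited), and 64 MB/s is 512 Mbps on the SSD.
```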

Internet available, speed not so much

What conclusions can be drawn? In the operator's area of responsibility everything is in order: a gigabit reaches the router, but beyond it the "choke points" begin. The main causes of reduced speed are obvious: a hard drive unable to keep up with writing the data (a budget SSD may also fail to cope), the overall performance of the computer, and insufficient upload speed on the source's side (it may be throttled by the remote end in software).

If the user has their own router, that device may also prove the weak link - its processor and its ports. Moreover, even in a "gigabit" setup the Ethernet ports may turn out to be 100-megabit. And there is the seemingly banal matter of the wires: the old or cheap twisted pair lying under and above the baseboards of many homes often has only four conductors (two pairs), and it will never carry gigabit no matter how hard you try, since 1000BASE-T needs all four pairs. Wireless connections are more complicated still.
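To check whether it is the drive rather than the network that limits you, time a large sequential write. A crude sketch; the file name and size are arbitrary, and you should run it on the drive your downloads land on:

```python
import os
import time

def write_speed_mbps(path: str = "testfile.bin", size_mb: int = 256) -> float:
    """Time a sequential write of `size_mb` megabytes and return the
    achieved speed in megabits per second."""
    chunk = b"\0" * (1024 * 1024)  # 1 MB buffer
    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(size_mb):
            f.write(chunk)
        f.flush()
        os.fsync(f.fileno())  # make sure the data actually hits the disk
    elapsed = time.perf_counter() - start
    os.remove(path)
    return size_mb * 8 / elapsed

# Example: print(f"{write_speed_mbps():.0f} Mbps")
```

If the reported figure is well below your tariff speed, the disk, not the channel, is what throttles the downloads.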

“How does this happen? You buy a router with 'gigabit' written on it, but that does not mean you get that speed always and everywhere. Usually it refers to the speed between LAN ports; between the LAN and WAN ports it may not hold. That is why operators publish recommended, tested models with guaranteed performance.

There is even more marketing in wireless. A simple example: the '300 Mbps' or '1100 Mbps' label for Wi-Fi,” says Oleg Gavrilov, head of the fixed-line department at velcom. A cable provides duplex communication with the same throughput in each direction.

Wi-Fi works differently: "1100 Mbps" means the speed is split roughly in half between directions. Moreover, for speeds above 300 Mbps the figures for the two bands are summed. “And honest manufacturers note next to the speed figure that the data was obtained in laboratory conditions, without a single source of interference,” Oleg added.

What else affects the transfer rate? LAN ports barely process the data (more precisely, the processor is minimally involved), while the WAN port is far more demanding of the router's performance. Hence the question of price: the more powerful the processor, the more expensive the device, even with otherwise "ordinary" specifications.

“Next comes the terminal device: laptop, computer, TV, set-top box. It is 2017, and gigabit network cards are present in all more or less modern PCs. With other devices there are nuances, especially if, say, a laptop carries a 'frugal' mobile processor.”

Much, if not everything, depends on what the user does online. Plain surfing cannot utilize even a fraction of those 100 megabits - 5 is enough. If you watch videos, download files, and play online games, then 50 Mbps is more than sufficient. But here it is a matter not only of transfer speed but also of the capabilities of the computer and its codecs: “You want to watch 4K over the Internet, but it stutters or drops to Full HD? It turns out the subscriber's device simply cannot handle such content.” Practice showed that YouTube delivered 4K without delay (on the 50-megabit tariff one often had to wait for buffering). The same went for 8K, except that the computer could not cope with it, producing a slide show.

From a technical point of view, a 50 Mbps channel is sufficient for streaming 4K content, given correctly built routes. Today a household typically has a single consumer of ultra-high-definition video, the TV, and even those are few. There are probably more 4K monitors, but the advantages of 4K films, which are themselves hard to find, are lost on a small diagonal. Over time, however, the need for them will take shape.
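As a sanity check on these numbers: with typical streaming bitrates, a 50 Mbps channel does fit a 4K stream with room to spare. The bitrates below are assumptions based on common service recommendations, not measurements from this test:

```python
# Assumed typical streaming bitrates, in Mbps (rough service guidance).
STREAM_BITRATE_MBPS = {"Full HD": 8, "4K": 25, "8K": 85}

def max_parallel_streams(channel_mbps: float, quality: str) -> int:
    """How many streams of the given quality fit into the channel."""
    return int(channel_mbps // STREAM_BITRATE_MBPS[quality])
```

Under these assumptions a 50 Mbps channel carries two 4K streams but zero 8K ones, which matches the observation that 8K was already a struggle.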

Loading - 5%

Judging by Internet usage patterns, even with a gigabit connection the user's behavior barely changes: you can indulge in tests, download a couple of games and films, and then return to your usual routine ("power users" and organizers of "home networks" are not counted).

Oleg Gavrilov agrees with us: “Downloading 'for a rainy day' is no longer fashionable. Everything can be watched online.”

Objectively that is true, but even so my Internet consumption did not grow. Of course, in the first days traffic set new records, yet over the whole period on the gigabit channel I downloaded only 48 GB more than usual, and that was due to the intensive testing. Then consumption gradually fell back to its previous level.

Today, large Belarusian operators providing Internet access increasingly look toward GPON technology (unlike Ethernet, it means "fiber to the apartment" rather than "fiber to the building"). It offers more capacity and, among other things, does not require regular replacement of the passive infrastructure as speeds grow.

It is logical to assume that with the spread of 4K and virtual content in Belarus, the need for speeds will also grow. But for now, Belarusians will have to wait.

At the Massachusetts Institute of Technology in May 1993, the world's first online newspaper, The Tech, was launched.

By 2008, the total distribution speed exceeded 172 Gbps, accounting for a quarter of all traffic at the Moscow exchange point MSK-IX. About 3 thousand client requests per second: 10 million per hour, 240 million per day. 40,000 packets per second on the network interface, 15,000 interrupts per second, roughly 1,200 processes in top, and a load of 10-12 on an 8-core machine at peak times. Even so, some requests were dropped; the servers could not keep up. Unfortunately, I failed to find current figures for peer-to-peer traffic; if anyone knows them, share in the comments for comparison.

In Ukraine, a regional tracker, torrents.net.ua, appeared in August 2005; the resource was created because most users in Ukraine lacked fast, unmetered access to world traffic.

Until September 2008 the tracker was closed to users outside the UA-IX zone, so its user base grew slowly.

First hosting providers

What about website hosting? At first there were no hosting providers as such; websites were hosted on the servers of universities and organizations with a permanent connection to the Internet. Tracking what happened between 1991 and 1995 is now quite difficult. In 1995 the Angelfire service offered as much as 35 KB of free space for user pages, and GeoCities a whole 1 MB. Read more about the early days in the article "The first steps of hosting", published earlier on our blog and perhaps one of the most complete accounts to date.

$200/month bought a 200 MB server quota and 3000 MB of transfer (500 MB on the minimum plan), with traffic over the limit billed at $55 down to $27 per GB. You could also connect a "dedicated line" for your site at these rates: 128K for $395/month, 384K for $799/month, 1M for $1200/month. Connecting the "channel" and activating hosting also carried a setup fee of about one monthly payment. At the end of 2000 the same provider offered unlimited disk space with traffic-only billing and cut the traffic price to $40 per 20 GB. By 2002 it had lowered tariffs to $20, made traffic "unlimited", and reintroduced quota restrictions.

The prices for renting the first dedicated servers in 2000 are also interesting:

A server with an 8 GB HDD looks like a real "fossil" nowadays. Then again, until 2004 I personally used a PC whose HDD offered about 7 GB of usable space. And the $5000+/month fee for 6 Mbps to a server looks terrifying today. The price was later cut to $300/Mbps, but that was still no small sum.

It goes without saying that prices for connectivity and Internet access fell thanks to growing subscriber numbers and the construction of new communication channels, including undersea optical trunks. Once you face the full complexity of laying cable on the ocean floor and learn the approximate cost of such a project, it becomes clear why 1 Mbps across the Atlantic could cost $300/month or more. You can read more about the history of backbone undersea Internet networks in our article:

In Ukraine and the Russian Federation, the process of hosting your own sites began, perhaps, with the free hosting narod.ru from Yandex, in 2000:

There was a similar project from mail.ru, boom.ru, but that free hosting never achieved Narod's reach. Later Yandex's free hosting was absorbed by the most successful free website builder and host of 2008-2010, uCoz, and building a site with uCoz tools is still available on the narod.ru domain. Yandex gave up Narod because of the rise of social networks and waning interest in build-your-own-site services.

Until 2002 it was profitable to host your own servers in Ukraine only with home network providers, though most kept their servers in offices or even at home because traffic for the colocation service was so expensive, even if this violated the terms of service for home subscribers. Many simply preferred ordinary desktop computers for the purpose rather than spending money on "server" hardware. Such old-timers can be found even today. Back then it was understandable why you would want a "hosting" setup at home; now it is hard to see the point. And this is not about people who like to run tests and need a server at home for that.

The situation abroad was better, because there the Internet reached the population earlier and development started sooner. The Netherlands was gradually becoming a "mecca" for server hosting, offering a good geographical location (and therefore connectivity with many operators), low electricity prices, and liberal legislation that fostered the growth of the IT sector.

So, in 1997, two commercial airline pilots decided to start a company that helped other companies establish an Internet presence: building an Internet directory, creating and hosting websites, and providing Internet connections. The Internet Archive preserved a 1998 version of the site, which, however, contained nothing but contact details:

Although, as we can see, there was another move - the amount of included RAM became much less by default :)

At the same time in Ukraine, Volya, one of the largest cable providers of Internet and television, realized that building its own data center was a real necessity. Home subscribers mostly download traffic, while the outgoing channel stays practically idle - hundreds of megabits that could be sold by hosting subscribers' dedicated servers. There were also large savings to be had, since many subscribers could use resources hosted in the data center instead of downloading from expensive foreign servers.

This is how the Volya data center arose, which already in 2006 offered the following conditions:

In effect, Ukrainian traffic was unmetered, with payment only for consumed foreign traffic. Notably, incoming foreign traffic cost an order of magnitude more than outgoing, which is understandable: it was what home Internet subscribers consumed. Besides, for servers that generate traffic, request traffic is usually small, from 2 to 30% of outgoing, depending on the type of resources hosted on the server.

If these are web pages with many elements, the share of request traffic is higher, since the successful loading of each element is acknowledged, increasing the traffic entering the server. Incoming traffic is also generated when subscribers upload something to the server. For plain file downloads, incoming traffic is negligible, under 5% of outgoing in most cases.

It is also interesting that placing your own server in the Volya data center was simply unprofitable, since it cost the same as renting one. In effect, the Volya data center offered servers of various classes for rent at no extra charge, depending on the chosen tariff plan.

Why can servers be free? The answer is simple: the equipment is standardized and purchased in bulk. Everything in this model is easier to maintain, administer, and automate, and it takes fewer man-hours. Hosting subscribers' own servers on colocation raises a string of problems. The server may be non-standard and not fit the rack, forcing you to allocate more rack units than planned or to turn the subscriber away over a non-standard case. You must also admit the subscriber to the site, let them physically work on the server, store spare parts on the premises, and have engineers perform replacements when needed.

Thus "colo" turns out to be more expensive to maintain, and it makes no sense for the data center to offer it at lower rates.

In Russia, meanwhile, data centers went further and began offering conditionally unlimited traffic for free. For example, Agava offered the following conditions:

Incoming and outgoing traffic is unlimited and completely free, provided that:

  • incoming traffic does not exceed 1/4 of outgoing;
  • outgoing foreign traffic does not exceed outgoing Russian traffic.

Note: traffic is divided into Russian and foreign geographically.

Failure to meet these conditions is billed at the following rates:

  • incoming traffic in excess of 1/4 of outgoing: 30 rubles/GB;
  • outgoing foreign traffic in excess of outgoing Russian: 8.7 rubles/GB.
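These billing rules are easy to misread, so here is a sketch of how an overage bill would be computed under them. The rates are taken from the list above; the function name and structure are ours:

```python
def agava_overage_rub(incoming_gb: float, outgoing_gb: float,
                      out_foreign_gb: float, out_russian_gb: float) -> float:
    """Overage bill in rubles under the quoted rules:
    incoming beyond 1/4 of outgoing -> 30 rub/GB,
    outgoing foreign beyond outgoing Russian -> 8.7 rub/GB."""
    cost = 0.0
    if incoming_gb > outgoing_gb / 4:
        cost += (incoming_gb - outgoing_gb / 4) * 30
    if out_foreign_gb > out_russian_gb:
        cost += (out_foreign_gb - out_russian_gb) * 8.7
    return round(cost, 2)
```

A server that pushes 400 GB out with 100 GB in, split evenly between Russian and foreign destinations, pays nothing; shift the balance and the meter starts.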


It is noteworthy that, for ease of accounting, the data center did not bother with lists of networks outside MSK-IX and SPB-IX (the Moscow and St. Petersburg exchange points), which by then had long been interconnected to give St. Petersburg good connectivity with M9 and M10 (exchange points in Moscow) and vice versa. The Internet was still not widespread in the regions, and their share of traffic was genuinely small, especially over leased lines. For that matter, Norilsk received its own fiber only in September 2017, this very year, becoming the last major Russian city to get one! The project cost about 40 million US dollars; the FOCL from Novy Urengoy runs 986 km, with a bandwidth of 40 Gbps and the possibility of expansion to 80 Gbps.

It is amusing to watch people in 2017 only now enjoying the high-speed Internet that has been available to most of us for over 10 years:

Well, firstly, I've started watching YouTube; before, I watched it once a year when I went to Moscow. I opened it just like that... and couldn't be pulled away, and if I also went into a torrent and downloaded something, well, that was really something... Now I can watch calmly. Say a video comes out, I watch it once a week, and I don't need to swallow the whole pile of information at once. And I can Skype with people! It's just amazing! I walk along filming: "Here I go, guys, look, it's winter!" The only downside is that the iPhone shuts down in the cold.

A detailed video about the FOCL project is available here: part 1, part 2, part 3, part 4. Just keep in mind that the journalists made inaccuracies: they claimed the satellite channel gave the city only 1 Gbps, while in fact total traffic before the FOCL was about 2.5 Gbps. And understand that the problem was not so much speed as the high ping of satellite Internet, to which the city fell back again during FOCL outages.

At the end of 2006, the first sites with online movies, file hosting, and similar resources appeared in the Russian Federation. To cut the cost of foreign traffic (Ukrainian traffic could be substantial and break the ratios prescribed by the same Agava), large projects tried to place some of their servers in data centers connected to UA-IX, or generated additional Russian traffic artificially, using torrents distributed exclusively to Russian users and, in some cases, file hosting services available only to Russian IP addresses. As a result, Ukrainians who wanted to download at full speed often bought a Russian VPN, since the speed of the same ifolder.ru was always higher from the Russian Federation:

File hosting services, despite the popularity of torrents, were gaining explosive popularity, since downloads from them were often much faster than over torrents, and you did not need to seed or maintain a ratio (giving away more than you download, or at least no less than a third of it). The culprit was the asymmetric DSL channel, with upload speeds 10 times or more below download speeds; besides, not every user wanted to seed and keep piles of files on their computer.

For example, Wnet paid the subscriber $1 per GB of Ukrainian traffic, while foreign traffic cost the subscriber $10 per GB, provided the ratio of outgoing to incoming traffic was 4/1. This was still a steep price, because traffic came out free only if there was 10 times more Ukrainian traffic than foreign. To give away 9 Mbps abroad for free, you had to push 90 Mbps to Ukraine - quite unlike Agava's offer, where it was enough for foreign traffic not to exceed Russian.
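The arithmetic behind "9 Mbps abroad requires 90 Mbps to Ukraine" follows directly from the rates: the $1/GB payout had to cover the $10/GB foreign bill. A one-line sketch (the function name is ours):

```python
def ua_traffic_needed(foreign_gb: float,
                      foreign_rate_usd: float = 10.0,
                      ua_payout_usd: float = 1.0) -> float:
    """GB of Ukrainian traffic whose payout covers the foreign-traffic
    bill, under the Wnet rates quoted above."""
    return foreign_gb * foreign_rate_usd / ua_payout_usd
```

The same 10x factor applies whether you count in GB or in Mbps of sustained throughput, which is why the 9-versus-90 example holds.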

Therefore, the previously discussed offer from the Volya data center was much more attractive than Wnet's, especially after Wnet decided on October 1, 2006 to withdraw from the Ukrainian exchange point UA-IX, when UA-IX refused to sell it the additional ports it needed. The refusal may have been the result of a "peering war", that is, lobbying by other providers Wnet had begun to compete with; or a genuine technical inability to provide more ports; or the fact that Wnet had violated the agreement by building direct peer-to-peer links with other exchange-point participants (a peculiarity of the exchange rules).

One way or another, by 2008 Volya already had a 20 Gbps connection to UA-IX and 4 Gbps to the world through several backbone operators. The further development of the hosting services market can be traced through our own history:

We began providing hosting services in 2006 among the users of our resource, and in July 2009 we spun the services off into a separate project, ua-hosting.com.ua, which later went international, moved fully abroad, and is now known under the ua-hosting.company brand, also reachable via the short domain http://ua.hosting.

It is worth noting that over the past 10 years the market has changed enormously, and the reason is not only the sharp drop in the cost of backbone channels but also the redistribution of the audience among many projects after the closure of once-popular ones. Successful resources such as file hosting services, which used to top the traffic charts in the Alexa rankings, have sunk into oblivion for many reasons, but mainly because of the ongoing war with copyright holders.

Thus in Ukraine the once-famous ex.ua, which generated over 15% of all traffic at the Ukrainian UA-IX exchange point (effectively the Kiev exchange point, since regional operators were rarely represented there, especially after the Giganet and DTEL-IX exchanges appeared), was shut down after the closure of the equally famous fs.to, which at one time bought 100 Gbps in the Netherlands from us. The case of the famous Megaupload was even more resonant: over 600 of its servers were seized from the Netherlands data center where we are located. Rutracker was blocked in the Russian Federation by Roskomnadzor, and torrents.net.ua ceased to exist in Ukraine out of fear of reprisals.

The audience moved on to YouTube, Instagram, and other social networks. Adult sites, perhaps, have not lost popularity, but for our webmasters from Russia and Ukraine earning from teaser advertising has lost all meaning: advertising rates no longer cover the cost of foreign channels, even though channel prices have fallen significantly compared even with 2012, when it seemed they could not get any cheaper.

The situation on the backbone channel market, which determines the relative cost of traffic delivery

As the information above suggests, the price of Internet traffic depends on where the traffic needs to be delivered, how popular that direction is, at what per-stream speed the data must be transferred, and with what latency. The price will also depend on which communication channels the traffic passes through, which determines how direct the route is and what priority the traffic receives in each case, which in turn determines the final latency (ping) from one point to another.

For example, 10 Gbit/s from Novy Urengoy to Norilsk will obviously cost neither $2,000/month nor even $6,000/month, since over $40 million was invested in the construction of the fiber-optic line (FOCL). Amortizing $40 million over 15 years gives 40/15/12 ≈ $0.22 million per month for the 40 Gbit/s link, or about $55,000/month for 10 Gbit/s, and this is not yet a channel to the Internet, but only the cost of delivering traffic over a high-quality FOCL between two remote settlements. This money must now be collected from the population of Norilsk, which will watch the same Youtube (traffic to which will cost even more, since backbone delivery up to Youtube's networks must also be paid for), so traffic there will be quite expensive and the activity of the population will be restrained by this price. Alternatively, Youtube may want to be "closer" to its users and agree to cover part of the channel costs for them, in which case access to the Youtube resource may become cheaper for the population of Norilsk. This example clearly demonstrates what the price of access to specific Internet resources can consist of. Someone always pays for your traffic, and if it is not you, it is either the advertisers and resources that generate this traffic, or a backbone provider, or simply an Internet provider that benefits from traffic in this direction (say, to get discounts in other directions or tax incentives, which can be the case for Norilsk, or simply because a wide channel was bought at a volume discount and would otherwise sit idle).
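The 40/15/12 arithmetic above can be restated as a short sketch. The $40 million figure comes from the text; treating 15 as an amortization period in years is an assumption implied by the formula, not stated explicitly:

```python
# Back-of-the-envelope amortization of the Novy Urengoy - Norilsk FOCL figures.
# Assumption: "40/15/12" means $40M spread over 15 years, paid monthly.
INVESTMENT_USD = 40_000_000    # FOCL construction cost from the text
AMORTIZATION_YEARS = 15        # assumed service/payback period
LINK_CAPACITY_GBPS = 40        # capacity of the built link

monthly_total = INVESTMENT_USD / AMORTIZATION_YEARS / 12     # whole 40 Gbit/s link
per_10_gbps = monthly_total / (LINK_CAPACITY_GBPS / 10)      # a 10 Gbit/s share

print(f"40 Gbit/s link: ${monthly_total:,.0f}/month")   # ~ $222,222/month
print(f"10 Gbit/s share: ${per_10_gbps:,.0f}/month")    # ~ $55,556/month
```

The ~$55,000 result matches the per-channel figure quoted in the text before any actual Internet connectivity is added on top.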

Tier I backbone operators, such as Cogent, Telia, Level 3, Tata and others, are distinguished by the fact that they charge for traffic delivery from everyone connected to them, so traffic generators try to exchange traffic directly with the providers where their audience is located. This gives rise to so-called peering wars, including between first-tier backbone operators and large generators, when priority is given to specific consumers while for others the price of cooperation is artificially inflated, either to crush a competitor or simply for enrichment, since the traffic generator has no other options. Disputes therefore arose very often, including litigation, as some companies did not maintain net neutrality and tried to conceal that fact.

Thus the dispute between Cogent and Google regarding IPv6 traffic has still not been resolved, which is why direct peering between the companies is simply impossible. Cogent demands money from Google for traffic on its network, while Google wants to exchange traffic for free, since the mass of Cogent's subscribers (data centers, home Internet providers) are active consumers of traffic from Google's networks, albeit over IPv4 rather than IPv6; peering would reduce latency and reduce the cost of traffic for these subscribers as the share of IPv6 traffic grew. But this is apparently unprofitable for Cogent: as a Tier I backbone provider, it is paid for external traffic by second-tier backbone operators (who pay Tier I providers and profit from third-tier providers) and even by third-tier providers (who pay second-tier providers and collect money from end customers).

To understand what makes up the final price of traffic for a resource, consider the situation using the popular Cloudflare service as an example. Its essence is to bring websites "closer" to their audience, reduce the load on their infrastructure by caching static content, and filter out possible DDoS attacks.

Of course, Cloudflare hosts servers in all regions where there is demand for its traffic, that is, almost all over the world. To save on traffic, it tries to conclude peering agreements with regional providers that can deliver traffic from Cloudflare to users for free, bypassing expensive Tier I backbone operators that charge for traffic in all cases. Why do local providers benefit from this? With significant traffic volumes they, like Cloudflare, would have to pay Tier I operators significant sums for traffic delivery; it is much more profitable to connect a channel "directly" (invest once in construction) and receive the traffic for free than to pay the backbone operator a lot of money every month. Even where direct peering is impossible, it can be cheaper to connect through the networks of other transit providers, where traffic costs much less than transit through Tier I. Yes, the route becomes less direct, ping may increase slightly and per-stream transmission speed may drop a little, but the quality can still be acceptable given such savings.

But peering agreements cannot always be concluded, so in some regions Cloudflare is forced to buy a fairly large percentage of its connectivity from backbone providers, and the price of traffic varies greatly by region. Unlike some cloud services, such as Amazon Web Services (AWS), or traditional CDNs, which often pay per terabyte of traffic, Cloudflare pays for peak channel usage over a period of time, based on the maximum number of megabits per second used during the month on any of its backbone providers. This accounting method is called burstable billing, and its most common special case is the 95th percentile. The 95th percentile is a technique that provides flexibility for bursty bandwidth usage: it allows the consumer to exceed the tariff's bandwidth for 5% of the total time of using the channel without any increase in cost. For example, if your tariff assumes a bandwidth of 5 Mbps, the limit can be exceeded for 36 hours every month (5% of 30 days). Bandwidth usage is measured and recorded every 5 minutes throughout the month as an average over that five-minute period: the amount of data transmitted in the interval is divided by 300 seconds (its duration). At the end of the month the top 5% of values are discarded, the maximum of the remaining 95% is selected, and that value is used to calculate the payment for the channel.
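The billing procedure described above can be sketched in a few lines; the sample data is synthetic, purely for illustration:

```python
import random

def ninety_fifth_percentile_mbps(samples):
    """Burstable billing: sort the month's 5-minute averages, discard the
    top 5% of values, and bill for the highest remaining one."""
    ordered = sorted(samples)
    keep = int(len(ordered) * 0.95)   # number of samples that count toward the bill
    return ordered[keep - 1]

# One month of 5-minute samples: 30 days * 24 h * 12 samples/h = 8640 values.
random.seed(42)
month = [random.uniform(2.0, 5.0) for _ in range(8640)]  # normal usage on a 5 Mbps plan
month[:432] = [50.0] * 432   # 36 hours of 50 Mbps bursts - exactly the free 5%

print(ninety_fifth_percentile_mbps(month))  # stays below 5: the bursts cost nothing
```

If the bursts lasted even one more sampling interval than 5% of the month, the highest remaining value would jump to 50 Mbps and the whole month would be billed at the burst rate.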

There is a legend that in its early days Google used 95th-percentile contracts to index at very high bandwidth during one 24-hour period while keeping traffic much lower the rest of the time, thereby saving significantly on channel costs. A clever, but certainly not very durable strategy, since Google later still had to build its own data centers and even its own channels in order to index resources more often and pay less for intercontinental transit.

Another "subtlety" is that backbone providers usually bill for the prevailing direction of traffic (inbound or outbound), which in Cloudflare's case means incoming traffic is effectively free. Cloudflare is a caching proxy service, so its output (out) usually exceeds its input (in) by about 4-5 times; bandwidth is therefore billed exclusively on outgoing traffic. For the same reason, the service does not charge an additional fee when a site comes under a DDoS attack. The attack certainly increases incoming traffic, but unless it is very large, incoming traffic still will not exceed outgoing traffic and therefore will not increase the cost of the channels used.

Most peering traffic is usually free. That was not the case for Netflix, which, after a long debate, had to pay Verizon and Comcast for peering connections in order to provide acceptable video streaming to users on their networks.

In the diagram above, we can see how Cloudflare's free peering connections grew over the course of 3 months, for both the IPv4 and IPv6 versions of the Internet Protocol. Below, also over 3 months, we can observe the global growth of Cloudflare's peering traffic; the service currently uses over 3,000 peering connections and saves about 45% of the cost of expensive backbone transit.

Cloudflare does not disclose exactly how much it pays for backbone transit, but it provides comparative values for different regions from which one can draw an approximate conclusion about its costs.

Consider North America first. Let's take a blended average of $10 per Mbps per month across all transit providers as our benchmark for North America. In reality the payment will be less than this and will depend on volumes and on the chosen backbone operator, but it can serve as a benchmark for comparing costs across regions. At this figure, each 1 Gbit/s costs $10,000 per month (again, remember that this value is higher than the real one and is typical of retail purchases; it is just a benchmark that makes the differences visible).

The effective price of bandwidth in a region is a blend of the cost of transit ($10 per Mbps) and of peering traffic ($0 per Mbps). Every byte sent over peering is a potential transit byte that does not have to be paid for. While North America has some of the lowest transit prices in the world, it also has lower average peering rates. The graph below shows the ratio between peering and transit traffic in the region. And while it has improved for Cloudflare over the past three months, North America still lags behind every other region in the world in terms of peering.

Thus, while Cloudflare's share of peering traffic exceeds 45% globally, in North America it is only 20-25%, which makes the effective cost without discounts $7.5-8 per Mbps. As a result, North America is only the second region in the world ranking of cheapest traffic. So where is the cheapest traffic?
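The effective-cost arithmetic used in these regional comparisons can be written down directly; the transit prices and peering shares below are the benchmark figures from the text, not real Cloudflare contract data:

```python
def effective_cost(transit_usd_per_mbps, peering_share):
    """Blend paid transit with free peering: every peered byte is a
    transit byte that did not have to be paid for."""
    return transit_usd_per_mbps * (1.0 - peering_share)

# Benchmark figures from the text (rounded to one decimal place).
print(round(effective_cost(10, 0.20), 1))   # North America: ~20% peering -> 8.0
print(round(effective_cost(10, 0.50), 1))   # Europe: ~50% peering       -> 5.0
print(round(effective_cost(70, 0.55), 1))   # Asia: $70 transit          -> 31.5
```

The same formula reproduces the other regions quoted later: Africa at $140 transit with 90% peering gives $14, and Australia at $200 transit with 50% peering gives $100.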

We have already considered Europe, where, owing to the historical concentration of population in certain regions, there have long been a great many exchange points. It is thanks to this that a larger percentage of peering traffic can be obtained and, as a result, the cheapest traffic in the world: the share of transit traffic for the same Cloudflare is at the level of 45-50%.

As a result, our reference cost drops to $4.5-5 per Mbps or less. The percentage of peering traffic depends directly on the number of participants at the largest exchange points in Europe: AMS-IX in Amsterdam, DE-CIX in Frankfurt and LINX in London. In Europe, traffic exchange points are mainly supported by non-profit organizations, while in the US exchange points are mostly commercial (the same Equinix in New York), which significantly affects the number of participants and, as a result, the number of those ready to sign peering agreements. For comparison, in Amsterdam, according to 2014 statistics, there are about 1,200 participants, while in the USA there are only 400.

The ratio of peering to transit traffic in the Asian region is approximately the same as in Europe; Cloudflare shows a graph with a value close to 50-55%. However, the cost of transit is 6-7 times higher than the European reference, at a full $70 per Mbps. The effective cost of traffic is therefore in the range of $28-32 per Mbps, 6-7 times higher than in Europe.

In general, the cost of transit in the Asian region is higher for several reasons. The main one is that there is less competition in the region, and more monopoly providers. Second, the Internet services market is less mature. Finally, a glance at the map of Asia reveals a lot of water, mountains and hard-to-reach, sparsely populated regions. Laying submarine cables is far more expensive than laying fiber in even the most remote land region, though that is not cheap either, so the cost of intercontinental transit and of transit through remote areas is offset by the cost of transit within the rest of the continent, which is artificially inflated to cover the costs of external and internal "remote" connectivity.

Latin America became a new region of presence for Cloudflare, and within 3 months the percentage of peering traffic there grew from 0 to 60%.

However, the cost of transit, as in Asia, is very high: 17 times more than transit in North America or Europe. The effective cost is $68 per Mbps, twice as high as in the Asian region, despite the fact that the share of peering traffic here is among the best in the world. Another problem of Latin America is that many countries have no data centers with a "carrier neutral" policy, under which participants can freely interconnect and exchange traffic with each other. Brazil turned out to be the most developed in this regard, and after Cloudflare built its own data center in Sao Paulo, the number of peerings increased significantly, which made it possible to reach the 60% peering figure.

The most expensive traffic for Cloudflare is, perhaps, in Australia, since it involves a great number of submarine routes for traffic delivery. Although the share of peering traffic within the region reaches 50%, Telstra, the monopoly operator of the Australian telecommunications market, does not allow the cost of transit within the country to fall below $200 per Mbit/s, citing how widely the population is dispersed across the region; that is 20 times higher than the reference value in Europe or the USA. As a result, the effective cost of traffic in 2015 stood at $100 per Mbps, among the most expensive in the world. Cloudflare spends about as much on Australian transit as on all of its traffic in Europe, although the population of Australia is 33 times smaller (22 million versus 750 million in Europe).

Interestingly, in Africa, despite the high cost of transit (around $140 per Mbps), Cloudflare managed to negotiate peering with 90% of providers, bringing the effective cost of traffic down to $14 per Mbps. Thanks to this, websites hosted in London, Paris and Marseille began to open quite quickly, and the presence in West Africa was expanded, giving faster access to European resources for the inhabitants of, say, Nigeria, which has about 100 million Internet users. In the Middle East, the share of peering traffic even reached 100%, making it the cheapest region in the world for Cloudflare, if you do not count the costs of building and maintaining data centers.

Two years later, at the end of 2016, prices in the most expensive region, Australia and Oceania, fell by 15%, bringing the traffic price to $85 per Mbps. In the case of Cloudflare, the statistics thus came to look something like this:

Interestingly, today there are 6 most expensive backbone providers - HiNet, Korea Telecom, Optus, Telecom Argentina, Telefonica, Telstra - whose traffic costs Cloudflare far more than traffic from other connectivity providers around the world, and who refuse to discuss lower transit prices. For Cloudflare, total traffic to these 6 networks is less than 6% of overall consumption, yet almost 50% of the money allocated to pay for total connectivity went to paying for traffic from these 6 most expensive networks. Of course, this could not go on forever, and Cloudflare decided to redirect its "free" users' traffic away from its presence in Australia and New Zealand to more remote data centers (Singapore or Los Angeles), where the cost of external channels is more reasonable. As a result, ironically, the same Telstra began to pay more, as its submarine routes became busier after this transition, which may serve as a good signal for lowering prices for resources like Cloudflare in the region.

Absolute values of the cost of traffic delivery from transit providers in different regions

TeleGeography is a wonderful resource: it helps track not only the growth of submarine backbone networks, but also the average market prices for traffic delivery, calculating the median for transit backbone providers in a given region. To show you the current order of prices, I ordered a small report from them; analysis of the data yielded the following results.

As you can see, the cost of the Dedicated Internet Access (DIA) service is currently at its maximum in Mumbai, Sao Paulo and Caracas, while in Europe and the US it stands at $6 and $8 per Mbps, respectively.

A logical question then arises: at such prices, how is it possible today to rent a server, say a 2 x Intel Dodeca-Core Xeon E5-2650 v4 / 128GB DDR4 / 6 x 480GB SSD configuration with a 1 Gbps channel and a 100 TB traffic limit, at $249/month - equivalent to 300+ Mbps of real consumption - if the cost of 1 Mbps averages $6/month according to the current report?

How can traffic in data centers be sold cheaper?

Large data centers, such as EvoSwitch in Amsterdam, where we are located, have their own developed backbone fiber networks, which allows significant savings on traffic delivery to certain regions and increases the number of possible peerings. In 2017, our partner became the owner of one of the largest and highest-quality networks.

As we can see from the illustration, total network capacity has reached 5.5 Tbps, with 36 points of presence around the world, over 2,000 peering connections and direct inclusion in 25 traffic exchange points. All this, of course, affects the effective cost of traffic, which, as we remember, is the sum of paid backbone transit connectivity and free peering connectivity, and can be reduced further by charging the traffic-consuming provider for peering connections. That is, traffic can be paid for not only by the generator but also by the recipient - the provider into whose network the traffic flows and who is interested in organizing a peering connection in order to pay backbone providers less, saving on traffic the same way the data center does. Among other things, the data center almost always has the opportunity to sell its excess "incoming" Internet capacity to home Internet providers, whose users mostly need exactly that direction of traffic and which otherwise goes unused in most data centers.

Nevertheless, even such an extensive network does not make the cost of traffic infinitely low. So no matter what guaranteed traffic conditions a data center offers, you need to understand that the final low price is achieved by selling bandwidth with a reasonable degree of oversell - selling more connectivity than actually exists, while strictly tracking the real traffic needs of the data center's users so that each user still gets their guaranteed bandwidth at the moment they need it. Moreover, the savings on traffic grow with the number of users served and with the number of peering and backbone channels connected to the network.

Consider an example. Ten users need a guaranteed 100 Mbps link to their servers; however, they do not always use it at 100%, and quite often not at the same time. Analysis of real consumption shows that all ten users together consume no more than 300 Mbps at peak. Buying 1 Gbps of dedicated bandwidth - or, with redundancy, 2 Gbps from different operators - and charging each user for a full (essentially double) dedicated channel becomes unreasonable. It is much more sensible to buy roughly a third of that, 700 Mbit/s, split between two independent backbone operators. This still provides a dedicated 100 Mbit/s channel for each of the 10 clients at the stated consumption level, with double fault tolerance, and even leaves about 100 Mbps of headroom for "growth" in case someone starts consuming more, which buys time to connect additional channels. If traffic is supplied by three independent backbone operators, the purchase becomes even more profitable: it is enough to buy only 500 Mbps of bandwidth, or even less, because with high probability only one channel of the three will fail at a time - a loss of no more than ~166 Mbps of connectivity when the maximum need is 300 Mbps. Thus we retain at least 334 Mbps of bandwidth at any time, sufficient for our subscribers' needs even if one of the uplinks fails.
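The sizing logic of this example can be sketched as a small formula; it is a simplification that assumes the purchase is split evenly across identical uplinks:

```python
import math

def min_purchase_mbps(peak_mbps, n_uplinks):
    """Smallest even-split purchase across n independent uplinks that still
    covers the observed peak after any single uplink fails."""
    # With one of n uplinks down, (n - 1)/n of the purchased capacity survives.
    return math.ceil(peak_mbps * n_uplinks / (n_uplinks - 1))

print(min_purchase_mbps(300, 2))  # 600 - the text buys 700 for extra headroom
print(min_purchase_mbps(300, 3))  # 450 - the text buys 500
```

The more independent uplinks there are, the closer the purchase can shrink toward the bare 300 Mbps peak while still surviving a single-uplink failure.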

In reality the situation is even simpler, and the degree of fault tolerance and redundancy is higher, since clients with a 100 Mbit/s channel often number not ten but tens of thousands, and most of them use very little traffic. According to our statistics, 1,000 servers with unmetered 100 Mbit/s channels consume on average only 10-15 Gbit/s at peak, or even less - the equivalent of 10-15% of the bandwidth allocated to them. At the same time, everyone retains the ability to consume 100 Mbit/s whenever they need it without metering, and channels are established with a great many backbone operators. There are even more peering connections, which often makes connectivity cheaper and of higher quality and eliminates the possibility of losing a huge part of connectivity at once. Thanks to this, the percentage reserved for fault tolerance drops from 50% to 5% or less. Of course, there are clients who saturate their channels completely, but there are also those who consume extremely little traffic while still renting a dedicated server with an unmetered 100 Mbps channel, because it is convenient - no fear of expensive overage fees - or simply because subscribers do not understand how much traffic they really need or how to calculate it. In effect, the users who do not consume their entire allocated bandwidth pay for the traffic of the users who use the channel to the full.

Among other things, you also need to remember the daily distribution of traffic to Internet projects, which also helps reduce costs. If your channel is 100% loaded in the evening, at the time of peak visits to your resource, then for the rest of the day the load will most likely be far below 100% - down to 10-20% at night - and the free capacity can be used for other needs (we leave aside generating traffic to another region, since that would most likely incur expensive transport charges). Otherwise, during peak hours visitors will start to experience problems and leave the website, and traffic will inevitably fall due to worsening behavioral factors and a drop in the resource's position in search results, if the project's traffic is mainly from search.

With gigabit connections, of course, channel utilization was above 10-15% in the initial period of the offer and could reach 50% or more, since such servers used to be ordered by traffic-generating subscribers who had outgrown a 100 Mbps port, while a gigabit port was much more expensive and made no sense for ordinary users to pay for without real need. These days, when it is possible to get 1 Gbit/s and even 10 Gbit/s at home, and the difference in cost between a switch supporting 1 Gbit/s and one supporting 100 Mbit/s has become insignificant, it turns out to be much more profitable to give everyone access to a 1 Gbit/s channel, even when they do not really need it, rather than limit the bandwidth - if only so that the client downloads the information they need as quickly as possible and thereby frees up the bandwidth for the next subscriber much sooner, in cases where they do not need to generate traffic constantly. That is why the real percentage of traffic utilization for servers with a 1 Gbps channel and a 100 TB limit turned out to be well below 10%: most users, of course, do not need such a channel all the time, and they release it 10 times faster for use by subsequent subscribers.

When applying this principle of providing Internet channels, there is a clear need to monitor traffic consumption in individual segments of the data center network, and even in each rack, so that as soon as someone's demand grows and the traffic reserve starts to shrink, an additional channel can be added, thereby guaranteeing "no limits" for everyone. Overall, this approach saves significant money on external communication channels and makes it possible to offer prices an order of magnitude lower than otherwise, and even to earn on traffic. After all, a data center cannot sell traffic at cost; it is simply obliged to earn, as it spends time and money on network maintenance, and a "healthy" business must be profitable.

This is why the oversell coefficient is present everywhere to a greater or lesser extent, even when selling offers with a 10 Gbps Unmetered channel to dedicated servers, which, it would seem, should consume their traffic in full. Reality turned out differently. We once sold over 50 dedicated servers with 10 Gbps Unmetered connections each, yet our total generated traffic barely exceeded 250 Gbps, even though the same channel was also used by 900+ dedicated servers with 100 Mbps and 1 Gbps connections. Because of this, we were able to provide servers with a guaranteed 10 Gbit/s channel at the incredible price of $3,000/month, and later almost 2 times cheaper (starting from $1,800). We were the first to sell connectivity at such low prices, and that is why we managed to generate so much traffic and gain many satisfied customers.

Today we are ready to go even further: thanks to cooperation with the Tier I backbone operator Cogent, we have the opportunity to sell excess connectivity in certain segments of its network in the Netherlands and the USA even cheaper - from $1199 for a server with an unmetered 10 Gbit/s channel and from $4999 for a server with a 40 Gbit/s Unmetered channel.

https://ua-hosting.company/serversnl - you can place an order here; if you need a location in the USA, open a request in a ticket. However, the Netherlands is the optimal location for our regions in terms of connectivity.

2 x Xeon E5-2650 / 128GB / 8x512GB SSD / 10Gbps - $1199
2 x Xeon E5-2650 / 256GB / 24x512GB SSD / 10Gbps - $2099
2 x Xeon E5-2650 / 256GB / 24x1TB SSD / 10Gbps - $3599
2 x Xeon E5-2650 / 256GB / 24x2TB SSD / 10Gbps - $6599

2 x Xeon E5-2650 / 256GB / 8x1TB SSD / 20Gbps - $1999
2 x Xeon E5-2650 / 256GB / 24x512GB SSD / 20Gbps - $2999
2 x Xeon E5-2650 / 256GB / 24x1TB SSD / 20Gbps - $4599
2 x Xeon E5-2650 / 256GB / 24x2TB SSD / 20Gbps - $7599

2 x Xeon E5-2650 / 256GB / 24x512GB SSD / 40Gbps - $4999
2 x Xeon E5-2650 / 256GB / 24x1TB SSD / 40Gbps - $5599
2 x Xeon E5-2650 / 256GB / 24x2TB SSD / 40Gbps - $8599

In individual cases, we can also offer adequate prices for 100 Gbps Unmetered connections; please contact us if you need such connectivity.

Of course, we and Cogent expect that you will not consume all of the bandwidth allocated to you, and the proposed configurations contribute to this: used with a RAID controller, it is very hard to push more than 6 Gbit/s of traffic, since the controller becomes a "bottleneck". If the drives are used independently, however, the traffic can be distributed optimally. In any case, we ensure that the stated bandwidth can actually be consumed, regardless of our and Cogent's hopes. Moreover, what is being sold is surplus connectivity that would otherwise simply sit idle. Nor should we forget that Cogent, as a backbone provider, takes money from everyone: the traffic you generate will in any case also be paid for by the provider into whose network it arrives.

However, you should not expect that by purchasing a server with such a channel you will get 10, 40 or 100 Gbit/s per stream; that is simply impossible for this kind of money, and often unnecessary. Point-to-point transport at high per-stream speed can cost serious money - in some cases $55,000 for 10 Gbit/s, as with the Novy Urengoy - Norilsk FOCL we reviewed above. What is guaranteed is excellent connectivity with the Internet as a whole. For most projects an average per-stream speed above 10 Mbit/s is enough, which allows you to host projects with Ultra HD video streaming and serve 1,000-4,000 "online" viewers from a single server.
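The 1,000-4,000 viewers estimate follows from simple division of channel capacity by stream bitrate; the 10 Mbit/s per-stream figure is the one given in the text:

```python
def concurrent_streams(channel_gbps, stream_mbps=10):
    """How many simultaneous viewers a server's channel can carry
    at a given per-stream bitrate."""
    return int(channel_gbps * 1000 // stream_mbps)  # Gbit/s -> Mbit/s, then divide

print(concurrent_streams(10))   # 1000 viewers from a 10 Gbit/s server
print(concurrent_streams(40))   # 4000 viewers from a 40 Gbit/s server
```

In practice some headroom should be reserved for protocol overhead and bitrate spikes, so real capacity will be somewhat below these ceilings.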

However, in some cases the per-stream rate can be significant even with a modest channel fee. Last year 10 Gbit/s home Internet began to spread in the US, when for a modest $400/month it became possible to get such an "unlimited" channel at home.

In such cases the "bottleneck" is often the home router providing Wi-Fi access (capable of delivering up to 300 Mbps), so wired connections come back into play, and people even install servers at home, along with sufficiently powerful computers and drives, so as not to hit their limits when using the channel. Why is this needed? Many people nowadays work with data from home. James Busch, a US radiologist, analyzes patient data from home, and the new channel saves him a lot of time.

“On average, an X-ray examination contains about 200 megabytes of data, while PET scans and 3D mammography can take up to 10 gigabytes. Thus, we are forced to process hundreds of terabytes of data. We calculated that we save about 7 seconds per study on average when using a 10 Gbit/s connection instead of a gigabit one. That may not seem like much, but multiplied by the 20-30 thousand studies we conduct per year, it turns out that we save about 10 days of productive work simply because we improved the connection speed by an order of magnitude.”

Thus, if you need high per-stream speed at minimal cost, you should place your 10, 20, 40 or 100 gigabit server as close to your users as possible. Then you will likely be able to generate traffic to some segments of the Internet at 1 and even 10 Gbit/s per stream.

Our time offers unique opportunities for new achievements. Today you can hardly say that hosting or renting a dedicated server is too expensive, and starting your own business or project has never been easier. The most productive server configurations are now available, with capabilities that in some cases exceed those of ten-year-old servers by up to three orders of magnitude, at prices not much higher than hosting cost in 2005 - and almost anyone can afford them. Traffic has become a thousand times cheaper, and channel speeds are higher. How you use them is up to you. Anyone can come up with an interesting Internet project - stop wasting your time. Rent a dedicated server, or at least a virtual one, and get started today, even if you don't need it yet and know nothing about it; it will serve as good motivation to keep going. Use these opportunities to make our world a better place. Even if you have never had experience in web development and building Internet projects, it is never too late to start: I once started from zero and generated more traffic than all of Belarus! I hope my experience will be useful to you. We are building the Internet - join us!

BLACK FRIDAY CONTINUES: a 30% discount on the first payment with promo code BLACK30% when ordering for 1-6 months!

These are not just virtual servers! These are VPS (KVM) with dedicated drives that can be just as good as dedicated servers, and in most cases better! We offer VPS (KVM) with dedicated drives in the Netherlands and the USA (configurations from E5-2650v4 (6 Cores) / 10GB DDR4 / 240GB SSD or 4TB HDD / 1Gbps 10TB, available at a uniquely low price from $29/month, with RAID1 and RAID10 available). Don't miss the chance to order this new type of virtual server, where all the resources belong to you, as with a dedicated server, at a much lower price and on much more productive hardware!

How do you build corporate-class infrastructure using Dell R730xd E5-2650 v4 servers worth 9,000 euros for pennies? Dell R730xd 2 times cheaper? Only here: 2 x Intel Dodeca-Core Xeon E5-2650v4 / 128GB DDR4 / 6x480GB SSD / 1Gbps / 100 TB from $249 in the Netherlands and the USA!
