Testing an ultra-high-speed network and looking for its weak points, or why major Internet providers abandoned high-speed tariffs

Last year, all the large providers suddenly began rolling out tariffs with breathtaking speeds. Three hundred megabits per second! Five hundred! A gigabit! And then, little by little, they dropped them. Now my beloved Online tops out at 100 Mbit/s, Beeline offers the same (with one caveat, discussed below), and the still-alive Akado for some reason lists 150 Mbit/s, though they can write whatever they like; it is all equally hard to believe.

Why did providers cut the speeds? Various explanations come to mind, from the high cost of investing in network infrastructure to the consequences of sanctions. But in reality everything is much, much simpler. It turned out to be like a famous joke I like to tell.

Leonid Ilyich Brezhnev is asked:

– Why is there no meat in stores?

“We are moving towards communism by leaps and bounds, the cattle can’t keep up with us.”

The cattle here is the level of technology we are all used to using.

Bringing a gigabit to your apartment is not cheap, but it is entirely possible, and that speed really can be delivered. But then things go wrong.

To push even 500 Mbit/s you need a top-end router, at most a year old, and preferably brand new; otherwise it will overheat and choke. Very few such routers are in the hands of the population. We use hardware so old it takes your breath away. Does it work? Well, let it keep working, no need to replace it. Even gigabit ports are still far from universal.

To transmit data at such speeds over the air you need 802.11ac support on both the router and, importantly, the end device. Meanwhile, official deliveries of laptops with 802.11ac to Russia have so far been prohibited due to the lack of certification from the Ministry of Communications. So it is 802.11n with a peak of 450 Mbit/s, and in practice about 300. And even on the few devices imported through "gray" channels, the peak connection speed rarely exceeded 866 Mbit/s.

And how did everything happen in practice?

A man with a router bought five years ago and a zoo of equipment that does not support 802.11ac connects a gigabit tariff and indignantly discovers that the speed is not what was promised! He can't squeeze out more than 300 megabits! They deceived him, the crooks! Angry forum posts, letters and calls to technical support follow. Without a visit from a technician it is impossible to find out what is going on, so the technicians start getting harassed too. The specialists tell users: buy new routers, update your equipment. This causes a new wave of indignation: you crooks, not only do you take money every month, now you want us to spend it on hardware too! I'll sue!

In general, after three months of such bedlam, the providers decided their nerves were worth more and removed the high-speed tariffs from their sites. Most users, in fact, have no use for 100 megabits, or even 50, so hardly anyone noticed the tariffs disappear.

People write to me that small providers are still trying to sell 400 and even 500 Mbit/s; either their users are better equipped or their technical support has nerves of titanium. MGTS keeps a 500-megabit tariff in its lineup, but they have no choice: they promoted "gigabit to every home" with such fanfare that it is impossible to walk it back now. Beeline sells 365 megabits on its most expensive tariff, but only bundled with its own router (a very cool one, by the way: the first implementation of 802.11ac on a MediaTek chipset). In any case, I'm afraid we won't see gigabit in the mass segment again for another couple of years.

I was in no rush to upgrade my home network from 100 Mbit/s to 1 Gbit/s, which is rather odd for me, since I transfer a lot of files over the network. However, when I spend money on a computer or infrastructure upgrade, I want an immediate performance boost in the apps and games I run. Many users like to treat themselves to a new video card, a new CPU, or some gadget, yet for some reason networking equipment does not attract the same enthusiasm. Indeed, it is hard to put your hard-earned money into network infrastructure instead of another technological birthday present.

However, my bandwidth requirements are very high, and at one point I realized that a 100 Mbit/s infrastructure was no longer enough. All of my home computers already have integrated 1 Gbps adapters (on their motherboards), so I decided to take the price list of the nearest computer company and see what I would need to convert my entire network infrastructure to 1 Gbps.

No, a home gigabit network is not that complicated at all.

I bought and installed all the equipment. I remember that copying a large file over the 100 Mbit/s network used to take about a minute and a half. After the upgrade to 1 Gbit/s, the same file copied in 40 seconds. The performance increase was pleasing, but still not the tenfold improvement you might expect from comparing the throughput of the old 100 Mbit/s network and the new gigabit one.

What is the reason?

For a gigabit network, all parts must support 1 Gbps. For example, if you have Gigabit network cards and associated cables installed, but the hub/switch only supports 100 Mbps, then the entire network will operate at 100 Mbps.

The first requirement is a network controller. Ideally, each computer on the network is equipped with a gigabit network adapter (discrete or integrated on the motherboard). This requirement is the easiest to satisfy, since most motherboard manufacturers have been integrating gigabit network controllers for the past couple of years.

The second requirement is the cable, which must also support 1 Gbit/s. There is a common misconception that gigabit networks require Cat 5e cable, but in fact even old Cat 5 cable supports 1 Gbit/s. However, Cat 5e cable has better characteristics, so it is the better choice for gigabit networks, especially over longer runs. Cat 5e is also the cheapest cable today, since the old Cat 5 standard is obsolete. Newer and more expensive Cat 6 cable offers even better performance for gigabit networks. We'll compare the performance of Cat 5e and Cat 6 cables later in this article.

The third and probably most expensive component of a gigabit network is the 1 Gbit/s hub/switch. It is, of course, better to use a switch (perhaps paired with a router), since a hub is not the most intelligent device: it simply broadcasts all network data to every port, which leads to a large number of collisions and slows the network down. If you need high performance you cannot do without a gigabit switch, which forwards data only to the port it is destined for, effectively increasing network speed compared to a hub. A router usually contains a built-in switch (with multiple LAN ports) and also lets you connect your home network to the Internet. Most home users understand the benefits of a router, so a gigabit router is a very attractive option.

How fast should gigabit be? When you hear the prefix "giga", you probably think of 1000 megabytes and assume a gigabit network should deliver 1000 megabytes per second. If so, you are not alone. But, alas, reality is different.

What is gigabit? This is 1000 megabits, not 1000 megabytes. There are 8 bits in one byte, so let's just do the math: 1,000,000,000 bits divided by 8 bits = 125,000,000 bytes. There are about a million bytes in a megabyte, so a gigabit network should provide a theoretical maximum data transfer rate of about 125 MB/s.

Sure, 125 MB/s doesn't sound as impressive as gigabit, but think about it: a network at that speed should theoretically transfer a gigabyte of data in just eight seconds. And a 10 GB archive should be transferred in just a minute and 20 seconds. The speed is incredible: just remember how long it took to transfer a gigabyte of data before USB sticks became as fast as they are today.
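For readers who like to see the arithmetic spelled out, here is a minimal sketch (decimal units, ignoring protocol overhead and disk limits):

```python
# A minimal sketch of the arithmetic above: decimal units, no protocol
# overhead, no disk bottlenecks.

def link_speed_mb_per_s(bits_per_second: float) -> float:
    """Convert a raw link speed in bits/s to megabytes per second."""
    return bits_per_second / 8 / 1_000_000

def transfer_time_s(file_size_bytes: float, bits_per_second: float) -> float:
    """Ideal transfer time for a file of the given size."""
    return file_size_bytes / (bits_per_second / 8)

gigabit = 1_000_000_000  # 1 Gbit/s

print(link_speed_mb_per_s(gigabit))              # 125.0 MB/s
print(transfer_time_s(1_000_000_000, gigabit))   # 8.0 s for 1 GB
print(transfer_time_s(10_000_000_000, gigabit))  # 80.0 s for 10 GB
```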

Our expectations were high, so we decided to transfer the file over a gigabit network and enjoy speeds close to 125 MB/s. We don't have any specialized fancy hardware: a simple home network with some old but decent technology.

Copying a 4.3 GB file from one home computer to another proceeded at an average speed of 35.8 MB/s (we ran the test five times). That is only about 30% of the 125 MB/s theoretical ceiling of a gigabit network.

What are the causes of the problem?

Selecting components for installing a gigabit network is quite simple, but getting the network to work at maximum speed is much more difficult. The factors that can cause a network to slow down are numerous, but we've found that it all comes down to how fast the hard drives are able to transfer data to the network controller.

The first limitation that needs to be taken into account is the interface of the gigabit network controller with the system. If your controller is connected via the old PCI bus, then the amount of data it can theoretically transfer is 133 MB/s. For Gigabit Ethernet's 125 MB/s throughput, this seems sufficient, but remember that the PCI bus bandwidth is shared throughout the system. Each additional PCI card and many system components will use the same bandwidth, reducing the resources available to the network card. Controllers with the new PCI Express (PCIe) interface do not have such problems, since each PCIe line provides at least 250 MB/s of bandwidth, and exclusively for the device.

The next important factor affecting network speed is the cabling. Many experts point out that if network cables are laid next to power cables, which are sources of interference, low speeds are guaranteed. Long runs are also problematic, as Cat 5e copper cable is certified for a maximum length of 100 meters.

Some experts recommend running cables to the new Cat 6 standard instead of Cat 5e. Often such recommendations are difficult to justify, but we will try to test the effect of cable category on a small gigabit home network.

Let's not forget the operating system. Granted, truly old systems are rarely used in a gigabit environment, but it is worth mentioning that Windows 98 SE (and older) cannot take advantage of gigabit Ethernet, since its TCP/IP stack can barely saturate even a 100 Mbit/s link. Windows 2000 and later are fine, although the older among them need some tweaking to make the most of the network. We'll be using 32-bit Windows Vista for our tests, and while Vista doesn't have the best reputation for some tasks, it supports gigabit networking out of the box.

Now let's move on to hard drives. Even the older IDE interface with the ATA/133 specification should be sufficient, with its theoretical 133 MB/s, and the newer SATA specification easily fits the bill, providing at least 1.5 Gbit/s (150 MB/s) of throughput. However, while the cables and controllers can handle data at such speeds, the hard drives themselves cannot.

Take, for example, a typical modern 500 GB hard drive, which should provide sustained throughput of about 65 MB/s. At the beginning of the platters (the outer tracks) the speed may be higher, but it drops as you move to the inner tracks, where data is read at about 45 MB/s.

We thought we had covered all possible bottlenecks. What was left to do? We needed to run some tests and see if we could get the network performance up to the theoretical limit of 125 MB/s.

Test configuration

Test systems: Server system / Client system
CPU: Intel Core 2 Duo E6750 (Conroe), 2.66 GHz, FSB-1333, 4 MB cache / Intel Core 2 Quad Q6600 (Kentsfield), 2.7 GHz, FSB-1200, 8 MB cache
Motherboard: ASUS P5K, Intel P35, BIOS 0902 / MSI P7N SLI Platinum, Nvidia nForce 750i, BIOS A2
Network: Built-in Abit Gigabit LAN controller / Integrated nForce 750i Gigabit Ethernet controller
Memory: Wintec Ampo PC2-6400, 2x 2048 MB, DDR2-667, CL 5-5-5-15 at 1.8 V / A-Data EXTREME DDR2 800+, 2x 2048 MB, DDR2-800, CL 5-5-5-18 at 1.8 V
Video card: ASUS GeForce GTS 250 Dark Knight, 1 GB GDDR3-2200, 738 MHz GPU, 1836 MHz shader unit / MSI GTX260 Lightning, 1792 MB GDDR3-1998, 590 MHz GPU, 1296 MHz shader unit
Hard drive 1: Seagate Barracuda ST3320620AS, 320 GB, 7200 rpm, 16 MB cache, SATA 300
Hard drive 2: 2x Hitachi Deskstar 0A-38016 in RAID 1, 7200 rpm, 16 MB cache, SATA 300 / Western Digital Caviar WD50 00AAJS-00YFA, 500 GB, 7200 rpm, 8 MB cache, SATA 300
Power supply: Aerocool Zerodba 620w, 620 W, ATX12V 2.02 / Ultra HE1000X, ATX 2.2, 1000 W
Network switch: D-Link DGS-1008D, 8-port 10/100/1000 unmanaged gigabit desktop switch
Software and drivers
OS: Microsoft Windows Vista Ultimate 32-bit 6.0.6001, SP1
DirectX version: DirectX 10
Graphics driver: Nvidia GeForce 185.85

Tests and settings

Nodesoft Diskbench: version 2.5.0.5; file copy, creation, read, and batch benchmark
SiSoftware Sandra 2009 SP3: version 2009.4.15.92; CPU test = CPU Arithmetic / Multimedia, memory test = Bandwidth Benchmark

Before we move on to any benchmarks, we decided to test the hard drives offline to see what kind of throughput we can expect in an ideal scenario.

We have two PCs running on our home gigabit network. The first, which we will call a server, is equipped with two disk subsystems. The main hard drive is a 320 GB Seagate Barracuda ST3320620AS, a couple of years old. The server operates as a NAS with a RAID array consisting of two 1 TB Hitachi Deskstar 0A-38016 hard drives, which are mirrored for redundancy.

We called the second PC on the network a client; it has two hard drives: both 500 GB Western Digital Caviar 00AAJS-00YFA, about six months old.

We first tested the speed of the server and client system hard drives to see what kind of performance we could expect from them. We used the hard drive test in SiSoftware Sandra 2009.

Our dreams of achieving gigabit file transfer speeds were immediately dashed. Each single hard drive reached a maximum read speed of about 75 MB/s under ideal conditions. Since this test is run under real conditions and the drives are 60% full, we can expect read speeds closer to the 65 MB/s mark we measured on both drives.

But let's look at the performance of RAID 1. The nice thing about this array is that a hardware RAID controller can increase read performance by fetching data from both drives at once, much like RAID 0; as far as we know, this effect occurs only with hardware RAID controllers, not with software RAID. In our tests the RAID array delivered much faster reads than a single drive, so the chances are good we'll see high network transfer speeds from it. The array posted an impressive 108 MB/s peak throughput, but in reality performance should be closer to the 88 MB/s mark, since the array is 55% full.

So we should get about 88 MB/s over the gigabit network, right? That's nowhere near the network's 125 MB/s ceiling, but it's far faster than a 100 Mbit/s network with its 12.5 MB/s ceiling, so 88 MB/s in practice wouldn't be bad at all.

But it's not that simple. Just because the read speed of hard drives is quite high does not mean that they will write information quickly in real conditions. Let's run some disk writing tests before using the network. We'll start with our server and copy the 4.3GB image from the high-speed RAID array to the 320GB system hard drive and back again. We will then copy the file from the client's D: drive to its C: drive.

As you can see, copying from the fast RAID array to drive C: averaged only 41 MB/s, and copying from drive C: to the RAID 1 array dropped to a mere 25 MB/s. What's going on?

This is exactly what happens in reality: drive C: was released a little over a year ago, but it is 60% full and probably somewhat fragmented, so it sets no records when writing. There are other factors too, namely how fast the system and its memory are overall. The RAID 1 array is built from relatively new hardware, but because of the redundancy the information must be written to two drives at once, which hurts performance. While RAID 1 can provide high read performance, write speed has to be sacrificed. We could of course use a striped RAID 0 array, which gives high read and write speeds, but if one drive dies, all the information is lost. Overall, RAID 1 is the better option if you value the data stored on the NAS.

However, all is not lost. The new 500 GB Western Digital Caviar drive is capable of writing our file at 70.3 MB/s (average across five test runs), and it posted a top speed of 73.2 MB/s.

With that said, we were expecting a real-world maximum transfer speed of 73 MB/s over a gigabit network from the NAS RAID 1 array to the client's C: drive. We'll also test file transfers from the client's C: drive to the server's C: drive to see if we can realistically expect 40MB/s in that direction.

Let's start with the first test, in which we sent a file from the client's C: drive to the server's C: drive.

As we can see, the results correspond to our expectations. A gigabit network, theoretically capable of 125 MB/s, sends data from the client's C: drive at the fastest possible speed, probably around 65 MB/s. But as we showed above, the server's C: drive can only write at about 40 MB/s.

Now let's copy the file from the server's high-speed RAID array to the C: drive of the client computer.

Everything turned out as we expected. From our tests, we know that the client computer's C: drive is capable of writing data at about 70 MB/s, and gigabit network performance came very close to that speed.
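In other words, the practical ceiling is set by the slowest component in the chain. A back-of-the-envelope sketch using the numbers measured above (the 125 MB/s figure is the theoretical network ceiling):

```python
# A back-of-the-envelope estimate: end-to-end file transfer speed is roughly
# the minimum of source read speed, network ceiling and destination write
# speed. The figures below are the ones measured in this article (MB/s).

def expected_transfer_mb_s(source_read: float, network: float, dest_write: float) -> float:
    return min(source_read, network, dest_write)

NETWORK_CEILING = 125.0   # theoretical gigabit ceiling, MB/s

raid_read      = 88.0     # server RAID 1 array, realistic read speed
client_c_write = 70.3     # client C: drive, measured write speed
server_c_write = 40.0     # server C: drive, measured write speed
client_c_read  = 65.0     # client C: drive, realistic read speed

# RAID array -> client C: drive
print(expected_transfer_mb_s(raid_read, NETWORK_CEILING, client_c_write))      # ~70 MB/s
# client C: drive -> server C: drive
print(expected_transfer_mb_s(client_c_read, NETWORK_CEILING, server_c_write))  # ~40 MB/s
```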

Unfortunately, our results do not come close to the theoretical maximum throughput of 125 MB/s. Can we test the maximum network speed? Sure, but not in a realistic scenario. We will try to transfer information across the network from memory to memory to bypass any bandwidth limitations of hard drives.

To do this, we will create a 1 GB RAM disk on the server and client PCs, and then transfer the 1 GB file between these disks over the network. Since even slow DDR2 memory is capable of transferring data at speeds of more than 3000 MB/s, network bandwidth will be the limiting factor.

We got a maximum of 111.4 MB/s on our gigabit network, which is very close to the theoretical limit of 125 MB/s. An excellent result that leaves little to complain about: actual throughput will never reach the theoretical maximum, because of protocol overhead, errors, retransmissions and so on.
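If you want to reproduce this kind of memory-to-memory measurement without setting up RAM disks, a small TCP test between two machines gives a similar picture. This is a simplified sketch, not a replacement for iperf; the port and transfer size are arbitrary:

```python
# A rough memory-to-memory throughput test over TCP, in the spirit of the
# RAM-disk experiment above: data goes from RAM on one machine to RAM on
# another, so disks are taken out of the equation.
# Run "python net_mem_test.py server" on one host and
# "python net_mem_test.py client <server-ip>" on the other.
import socket
import sys
import time

PORT = 5001
CHUNK = bytearray(1024 * 1024)  # 1 MB of zeros, sent repeatedly
TOTAL_MB = 1024                 # send roughly 1 GB

def server():
    with socket.create_server(("", PORT)) as srv:
        conn, _ = srv.accept()
        with conn:
            received = 0
            start = time.time()
            while True:
                data = conn.recv(65536)
                if not data:
                    break
                received += len(data)
            secs = time.time() - start
            print(f"received {received / 1e6:.0f} MB at {received / 1e6 / secs:.1f} MB/s")

def client(host):
    with socket.create_connection((host, PORT)) as conn:
        start = time.time()
        for _ in range(TOTAL_MB):
            conn.sendall(CHUNK)
        conn.shutdown(socket.SHUT_WR)  # signal end of data to the server
        secs = time.time() - start
        print(f"sent {TOTAL_MB} MB at {TOTAL_MB / secs:.1f} MB/s")

if __name__ == "__main__":
    server() if sys.argv[1] == "server" else client(sys.argv[2])
```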

The conclusion is this: today, transfer performance over a gigabit network is limited by hard drives; the transfer speed is capped by the slowest drive involved. Having answered the most important question, let's move on to speed tests with different cable configurations to make the article complete. Could optimizing the cabling bring network speeds even closer to the theoretical limit?

Since performance in our tests was close to expected, we're unlikely to see any improvement by changing the cable configuration. But we still wanted to run tests to get closer to the theoretical speed limit.

We conducted four tests.

Test 1: default.

In this test we used two cables about 8 meters long, each connected to a computer at one end and to the gigabit switch at the other. We left the cables where they were originally laid, that is, next to power cables and outlets.

Test 2: we used the same 8-meter cables as in the first test, but moved the network cables as far away from power cables and extension cords as possible.

Test 3: we removed one of the 8-meter cables and replaced it with a one-meter Cat 5e cable.

Test 4: in the last test we replaced both 8-meter Cat 5e cables with 8-meter Cat 6 cables.

In general, our testing of different cable configurations did not show a significant difference, but conclusions can be drawn.

Test 2: reducing interference from power cables.

On small networks like our home setup, the tests show you don't need to worry much about running LAN cables near electrical wiring, outlets and extension cords. The interference is certainly higher, but it has no serious effect on network speed. Still, it is better to avoid routing network cables alongside power cables, and keep in mind that the situation may be different on your network.

Test 3: reduce the length of the cables.

This is not an entirely rigorous test, but we tried to detect a difference. Keep in mind that any change after swapping an eight-meter cable for a one-meter one may come down to the cables simply being different rather than to the difference in length. In any case, most tests show no significant difference, with the exception of an anomalous jump in throughput when copying from the client's C: drive to the server's C: drive.

Test 4: Replace Cat 5e cables with Cat 6 cables.

Again, we found no significant difference. Since our cables are only about 8 meters long, longer runs might show a bigger gap. But if your cable lengths are nowhere near the maximum, Cat 5e works just fine on a home gigabit network with 16 meters of cable between two computers.

It is interesting to note that manipulating the cables had no effect on data transfer between computer RAM disks. It's clear that some other component on the network was limiting performance to the magic number of 111 MB/s. However, such a result is still acceptable.

Do gigabit networks provide gigabit speeds? As it turns out, they almost do.

However, in real conditions, network speed will be seriously limited by hard drives. In a synthetic memory-to-memory scenario, our gigabit network produced performance very close to the theoretical limit of 125 MB/s. Regular network speeds, taking into account the performance of hard drives, will be limited to levels from 20 to 85 MB/s, depending on the hard drives used.

We also tested the effects of power cords, cable length, and upgrading from Cat 5e to Cat 6. On our small home network, none of the factors mentioned impacted performance significantly, although we do note that on a larger, more complex network with longer lengths these factors can have a much stronger influence.

In general, if you transfer a large number of files on your home network, then we recommend installing a gigabit network. Upgrading from a 100Mbps network will give you a nice performance boost; at least you'll get a 2x increase in file transfer speeds.

Gigabit Ethernet on your home network can provide greater performance gains if you read files from a fast NAS storage device that uses hardware RAID. On our test network, we transferred a 4.3GB file in just one minute. Over a 100 Mbps connection, the same file took about six minutes to copy.

Gigabit networks are becoming more and more accessible. Now all that remains is to wait for hard drive speeds to catch up. In the meantime, we recommend building arrays that can work around the limitations of today's hard drive technology. Then you can squeeze more performance out of your gigabit network.

How my cat and I "overclocked" the twisted pair coming into the apartment a little

It is usually believed that the “ceiling” of DSL is 20-30 Mbit/s, FTTB (fiber to the building plus twisted pair to the apartment) is 100 Mbit/s, and PON supports speeds above 100 megabits. My colleagues and I decided to dispel the myths and “overclock” FTTB for a single provider.

The first thing we did was climb into the attic of the building. At least 1 Gbit/s already comes to every building, and the building node has a suitable gigabit "copper" port. That means we can simply switch any apartment our cable reaches over to such a port and provide speeds of 400 Mbit/s and higher.

Actually, we have already done this as part of beta tests, and recently we launched commercial services in Moscow with new speeds. Yes, you can most likely connect.

What was that, cap?

Our backbone and city networks have a reserve of free capacity that significantly exceeds customers' needs even during the hours of the very highest load. Take, for example, the holiday my cat and I love dearly: New Year, when even the grandmothers who have made friends with the Internet and Skype get their share of congratulations.

What is the difference with PON

The point is that our FTTB network, which allows us to do all this, already exists. No modifications are required. The cable already enters your apartment; all the wires are in place; the optics reach the nodes in the buildings. You just need to move the cable to another port on the switch. That's it! Such a channel already comes almost to your door, you just didn't know about it. PON, on the other hand, has to be built: it is new infrastructure across the whole city. There is another catch: the optical fibers fan out from regional exchanges, and PON requires proprietary client devices. With a regular FTTB network you can use almost anything, although there are few devices well adapted for L2TP.

How it works

A large transport channel runs from the main ring to the city level. Around the city there are several large rings, from which smaller rings or mesh structures branch off, with "stars" built at the building level. Optical transport links run from that level to the closet in the entrance hall. On the access-level switch we reconnect the cable to a gigabit port... and that's it, we now have a gigabit link to the client.

These are the results of my colleague, without a cat but with Wi-Fi (802.11ac).

Technical feasibility

After I was able to test this at several points, we installed such links for all company employees working on the project. Quite quickly, I must say. There are almost no technical limitations: the only thing is that there are literally a couple of dozen blocks in Moscow where the equipment needs to be slightly modernized, but we are already working on this.

More restrictions

You will laugh, but we have not found many ways to saturate such speeds. It turned out that a channel like this can only be filled by very specific tasks: CCTV from a pile of cameras, HD video for the whole family at once, or a photographer uploading pictures somewhere. Network drives, too. In general, torrents aside, it is a provider's dream: the client pushes through one heavy file a day and is wildly happy.

But a number of other things came up that relate directly to marketing. First, almost all resources deliver content much more slowly than the channel can receive it. This is the eternal "why is my Internet slow when the speed tests look fine" problem. What's needed are resources capable of delivering content to a large number of clients at high speed, and someone will have to part with their illusions: not every resource meets that requirement.
100 Mbit/s is already a very fast connection for the vast majority of users. Higher speeds are really only needed by those who work with very large files.


Everything in this photo is correct, including the router

The link to the desktop or laptop has to be copper: Wi-Fi, especially with interference from neighboring networks, simply will not let you use the full channel at such speeds. So the best option is the desktop on cable, and tablets and phones over the air.

The end devices themselves can also cut into the throughput. Naturally, you need hardware that supports 400 Mbit/s (a router or a network card). In beta, however, there were a couple of surprises: not every device can actually handle such traffic, despite claims to the contrary.

Tests

This is where the fun begins. We took 10 high-performance devices with L2TP support.

Gigabit is fast, especially for home use, so the routers have to be up to it. I'll say right away that it wasn't possible to cover and quickly test every model, so we focused on devices with gigabit ports, dual-band Wi-Fi and good user reviews.

Our shortlist:
Asus RT-68U
D-Link DIR 825 D1
Pre-sale sample from new manufacturer Totolink
Zyxel Keenetic Ultra
Apple AirPort Time Capsule

Once I had tested the devices against our checklists in the office, it was time to test them in the field, where you can evaluate real-world performance.

I tried to prepare thoroughly for this. I took a MacBook Pro 15 Retina (late 2012), my main work laptop, put a 128 GB SSD into a separate desktop and added an Asus PCE-AC68 Wi-Fi adapter there so that nothing would get in the way of the "overclocking"; I also took a Totolink A2000UA USB Wi-Fi adapter with 802.11ac support just in case. In addition, I brought an iPad mini, an iPhone 5 and a Samsung Galaxy Note for testing Wi-Fi.

To check the speed, in addition to the usual resources such as Speedtest and file downloads, I installed Iperf on one of our servers connected via a gigabit link to our core network. In the end it all looked something like this:

A little about the test methodology

In many router reviews I've seen, test stands are assembled with traffic-generating software. We decided to do it differently: to test the way a subscriber would when checking the speed of their Internet access.
The main tools were:
1) Speedtest.net – you can’t live without it
2) Mirror.yandex.ru
3) Iperf – a little synthetics
4) Youtube

The list is small, but these resources let you evaluate how fast the Internet access really is: a natural product, so to speak, no synthetics.
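For the file-download part, a tiny script like the one below is enough to turn "download a big file" into a number; the URL here is only a placeholder, since the exact file path on mirror.yandex.ru changes over time:

```python
# A minimal sketch of the "download a file and time it" test.
# The URL is a placeholder: point it at any large file on a fast mirror.
import time
import urllib.request

URL = "https://mirror.yandex.ru/path/to/large-file.iso"  # placeholder path

def measure_download(url: str, chunk_size: int = 1 << 20) -> float:
    """Stream the file, discard the data, and return the average speed in MB/s."""
    received = 0
    start = time.time()
    with urllib.request.urlopen(url) as resp:
        while True:
            chunk = resp.read(chunk_size)
            if not chunk:
                break
            received += len(chunk)
    return received / 1e6 / (time.time() - start)

if __name__ == "__main__":
    print(f"average speed: {measure_download(URL):.1f} MB/s")
```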

Let's start testing

First, let's see which Wi-Fi networks are already nearby

“People’s” 2.4GHz band – no more and no less

5GHz - we even got here, but there are not many networks, two of them are ours

Asus RT-68U


Asus's top router. The hardware commands respect: a Broadcom BCM4708A chip, 256 MB RAM, 128 MB flash, 802.11ac support and beamforming.

Patch cord: speed test showed 224Mbps for download and 196Mbps for Upload

Good result, we continue testing, Iperf is next.

The unexpected happened during this test: either the router or iperf started acting up, and the results would not rise above 50 Mbit/s. No problem, let's look at a more lifelike test: downloading a file from Yandex.

Almost 35MB per second!

I ran the tests a few more times, then decided to clean up the SSD; at these speeds it filled up quickly.

Now let's take a look at how fast Wi-Fi works. Wireless networking is a tricky thing, and many factors can affect the final performance. The laptop was located 4 meters from the router in a straight line.

The speed test showed almost 165Mbps on Download and 166 on Upload. Worthy! Especially when it comes to the 2.4GHz band

Iperf showed similar values:

Let's switch now to 5GHz. Since the router can work with 802.11ac, but my work Macbook does not, I connected an external adapter that supports 802.11ac 2x2.

The connection was successful... Let's take a look at the speed test:

209Mbps on Download 111 on Upload, most likely 210Mbps is the current ceiling for router performance over L2TP. Let's hope that Asus will fix this in new firmware.

Iperf showed even lower results:

D-Link Dir 825 D1

Next in line is the representative of the mid-price range D-Link DIR825. The router is equipped with a Dual-Band Wi-Fi module, which is currently rare for the mid-price range. Let's see what this router is capable of.

Connection via patch cord

Let's move on to testing Wi-Fi networks. The router has two Airgain antennas, so I expect high speeds over Wi-Fi as well.

For a network operating in the 2.4GHz range:

This frequency is maximally loaded, so this result was, in principle, expected. How will 5GHz manifest itself?

130-150Mbps. When tinkering with the settings in detail, it turned out that if you disable Wi-Fi network encryption, performance increases. Of course, I didn’t discover America, but I didn’t find such a pattern on other routers.

Let's move on to the next test subject - Totolink

This router has similar characteristics to the D-Link DIR 825, they are built on the same SoC - RTL8197D, but in this router the radio module supports 802.11ac. It will be interesting to evaluate its capabilities in real conditions.

Patch cord:

Eh... okay, I'll leave it without comment.

We are getting closer to reality.

To be honest, I didn’t think that the “old man” RTL8197D was capable of pumping L2TP through itself at such speeds. This makes the results of Wi-Fi network testing more interesting.

“People’s” frequency – 2.4GHz

Both speedtest and iperf showed almost identical results.
At 5GHz the speed should be off the charts! Or maybe not...

But no, even though the connection reported a link rate of 867 Mbit/s.

Iperf brings it back down to earth, and does a good job of it.

Our latest marathon participant is Zyxel Keenetic Ultra

A popular model among L2TP devices. It accelerates well and works stably. We connect the patch cord and run the speed test:

And I'll download the Fedora distribution, which has become like family over the course of these tests:

Unfortunately, this model from Zyxel does not support 802.11ac, so I will be content with 802.11n. Let's get started!

Let's look at 5GHz

Neither more nor less: right at the standard. This did not satisfy me, so I connected a new Time Capsule with 802.11ac support (very nominal in the PCT model) to the router.

Here! It’s a pity that manufacturers don’t include a time capsule with their routers.

What if you measure the speed on a phone/tablet?

Most users, especially those who are not familiar with the methodologies of various performance tests, simply launch the application on their phone. I'll do that too.

There was an iPhone, tablet and Android phone available. There is no point in testing the connection on each router, so I settled on the latest router model.

For 2.4GHz and 5GHz, respectively, here we have hit the performance ceiling of the Wi-Fi module on the phone. Devices on Android showed approximately the same results, while on a tablet this speed was obtained when connected to a network at 5GHz; at 2.4GHz it will most likely be lower:

Well, tests on the street:

What happened?

The process of testing the new service fascinated my cat and me so much that in the end we tested 10 routers from different manufacturers and price brackets, with different internals. Here are the most interesting:
  • Zyxel Keenetic Ultra
  • D-Link DIR825
  • Toto-Link
  • Asus RT-68U
  • Zyxel Keenetic Giga II
  • TP-Link Archer C7 v.1
  • D-Link DIR 850L
  • Buffalo WZR-1759DHP
  • Netgear R7000 "Nighthawk"
  • Arcadian
So if you have an SSD or an HDD RAID array at home, a good Wi-Fi router, and tasks that genuinely need fast Internet, the new service will be useful to you.

Price

The service is new; a description will appear on the website in a couple of days. The cost is 1850 rubles per month without our router.

UPD, according to requests in the comments:

Asus RT-68U / D-Link DIR 825 D1 / Toto-Link / Zyxel Keenetic Ultra

Via cable (WAN-LAN)
Speedtest: D 224.2, U 196.77 Mbit/s / D 352.16, U 370.3 Mbit/s / D 462.6, U 255.8 Mbit/s / D 408.22, U 275.59 Mbit/s
Iperf: 26.3 Mbit/s / 354 Mbit/s / 379 Mbit/s
File download: ~35 MB/s / ~43 MB/s / ~50 MB/s / ~52 MB/s

Wi-Fi 2.4GHz
Speedtest: D 164.53, U 165.68 Mbit/s / D 86.72, U 57.82 Mbit/s / D 155.01, U 118.62 Mbit/s / D 131.57, U 113.53 Mbit/s
Iperf: 140 Mbit/s / 52.5 Mbit/s / 152 Mbit/s / 132 Mbit/s

Wi-Fi 5GHz
Speedtest: D 209.4, U 111.38 Mbit/s / D 148.27, U 149.45 Mbit/s / D 233, U 132.76 Mbit/s / D 185.4, U 181.07 Mbit/s
Iperf: 163 Mbit/s / 130 Mbit/s / 192 Mbit/s / 171 Mbit/s

At the Massachusetts Institute of Technology in May 1993, the world's first online newspaper, The Tech, was launched.

By 2008, the total distribution speed exceeded 172 Gbit/s, which amounted to a quarter of the total traffic of the Moscow traffic exchange point MSK-IX. About 3 thousand client requests per second: 10 million per hour, 240 million per day. 40,000 packets per second on the network interface, 15,000 interrupts per second, roughly 1,200 processes in top, and a load average of 10-12 on an 8-core machine during peak hours. And still some requests were dropped; we simply couldn't serve them all. Unfortunately, we could not find comparable present-day figures for peer-to-peer traffic; if anyone knows them, share in the comments for comparison.

A regional tracker, torrents.net.ua, appeared in Ukraine in August 2005. The need to create a resource was due to the lack of high-speed and unlimited access to global traffic for most users in Ukraine.

Until September 2008, the tracker was closed to users not from the UA-IX zone, so the number of users grew at a low rate.

The first hosting providers

What about website hosting? At first, there were no hosting providers as such. The websites were hosted on the servers of universities and organizations with a permanent connection to the Internet. It is now quite problematic to track what happened in the period 1991-1995. In 1995, the Angelfire service offered as much as 35 KB of space for user pages for free, and GeoCities - as much as 1 MB. For more information about the first steps of hosting, read the article of the same name “First Steps of Hosting”, which was published on our blog earlier and is perhaps one of the most complete today.

$200/month bought 200 MB of server quota and 3000 MB of outgoing traffic (500 MB on the minimum plan), with traffic over the limit billed at rates from $55 down to $27 per GB. You could also connect a "dedicated line" for your website at the following rates: 128K for $395/month, 384K for $799/month, 1M for $1200/month. Connecting a "channel" and activating hosting also carried an installation fee of roughly one monthly payment. At the end of 2000 the same provider offered unlimited disk space with payment only for traffic and cut the traffic price to $40 per 20 GB. By 2002 it had lowered the tariff to $20, made traffic "unlimited" and reintroduced quota restrictions.

Also interesting are the prices for renting the first dedicated servers in 2000:

A server with an 8 GB HDD looks like a genuine fossil today. Then again, I personally used PCs with roughly 7 GB of usable disk space until 2004. And of course, paying $5000+ per month for 6 Mbit/s to the server looks terrifying now. Later the price dropped to $300 per Mbit, but that was still far from cheap.

It goes without saying that the reduction in prices for connectivity and the cost of Internet access was due to an increase in the number of subscribers and the construction of new communication channels, including underwater optical highways. When you are faced with the complexity of laying cables along the ocean floor and find out the approximate cost of the project, it becomes clear why 1 Mbit/s across the Atlantic could cost $300/month or even more. You can find out more about the history of the development of backbone underwater Internet networks in our article:

In Ukraine and the Russian Federation, the process of hosting your own websites began, perhaps, with free hosting narod.ru from Yandex, in 2000:

There was also a similar project from mail.ru, boom.ru, but that free hosting never became as widespread as Narod. Later the free Yandex hosting was absorbed by the most successful free website builder and host of 2008-2010, uCoz, and sites on the narod.ru domain can now be built with uCoz tools. Yandex gave up on Narod because of the rise of social networks and waning interest in building one's own website.

Until 2002 in Ukraine it made financial sense to host your own servers only with home network providers, and even then most people kept their servers in offices or at home because collocation traffic was so expensive, even though this violated the terms of service for home subscribers. Many simply preferred to use ordinary desktop computers for the purpose rather than spend money on "server" hardware; such old-timers can still be found today. Back then it was understandable why you might want to run "hosting" at home; now it is hard to see the point, unless you are someone who likes to experiment and needs a server at home for that.

The situation abroad was better, because there the Internet became available to the population earlier and the development process began earlier. The Netherlands is gradually becoming a “mecca” for hosting servers, as it offers a good geographical location, which means connectivity with many operators, low electricity prices, and favorable legislation that promotes the growth of the IT sector.

So in 1997, two commercial airline pilots decided to start a company that would help other companies get a presence on the Internet by creating an Internet directory, as well as providing services for creating and hosting websites and connecting to the Internet. The Internet Archive preserved a 1998 version of the website, which, however, did not contain anything except the contact:

Although, as we can see, there was another trick: the amount of RAM included by default became much smaller :)

At the same time in Ukraine, one of the largest cable Internet and television providers, Volya, realized it badly needed its own data center. Since home Internet subscribers mostly download, the outgoing channel sits practically free and unused, and that is hundreds of megabits that could easily be sold by hosting dedicated servers for subscribers. On top of that there are real savings to be had, since many subscribers could use resources located in the data center instead of pulling them from expensive foreign servers.

This is how the Volya data center arose, which already in 2006 offered the following conditions:

In essence, they offered unmetered Ukrainian traffic and charged only for foreign traffic consumed. It is notable that incoming foreign traffic was an order of magnitude more expensive than outgoing, which is understandable, because it was what home Internet subscribers consumed. Besides, for servers that generate traffic, request traffic is usually small, from 2% to 30% of outgoing traffic depending on the type of resources hosted on the server.

If the server hosts web pages with many elements, the share of request traffic is higher, since the successful delivery of each element is acknowledged, which increases the traffic coming into the server. Incoming traffic is also generated when subscribers upload something to the server. For plain file downloads the share of incoming traffic is negligible, under 5% of outgoing in most cases.

It is also interesting that colocating your own server in the Volya data center made little sense, since it cost the same as renting. In effect, the Volya data center gave away rental servers of various classes for free, depending on the tariff plan selected.

Why can servers be free? The answer is simple: the equipment is standardized and purchased in bulk, which makes it easier to maintain, administer and automate, and requires fewer man-hours. Placing subscribers' own servers on "colo" creates a string of problems. The server may be non-standard and not fit in the rack, so you have to allocate more units than planned or turn the subscriber away because of a non-standard case. You also have to let the subscriber onto the site, give them the opportunity to physically work on the server, store spare components on site and admit their engineers when something needs replacing.

Thus "colo" turns out to be more expensive to maintain, and offering it at lower rates makes no sense for a data center.

At the same time, data centers in Russia went further and began to offer conditionally unlimited traffic for free. For example, Agave offered the following conditions:

Incoming and outgoing traffic is unlimited and completely free. The following conditions must be met:

Incoming traffic must not exceed 1/4 of outgoing traffic.
Outgoing foreign traffic must not exceed outgoing Russian traffic.
Note: traffic is divided into Russian and foreign based on geography.
If these conditions are not met, the following rates apply:

Incoming traffic in excess of 1/4 of outgoing is billed at 30 rubles/GB.
Outgoing foreign traffic in excess of outgoing Russian is billed at 8.7 rubles/GB (a small sketch of this billing logic follows below).
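Here is a minimal sketch of how such a billing rule could be computed; the rates are the ones quoted above, while the traffic figures are invented purely for illustration:

```python
# A minimal sketch of the Agave-style conditions above, applied to one month
# of traffic. Rates are taken from the text; the example inputs are made up.

def monthly_overage_rub(in_gb: float, out_ru_gb: float, out_foreign_gb: float) -> float:
    out_total = out_ru_gb + out_foreign_gb
    bill = 0.0
    # Incoming traffic beyond 1/4 of outgoing: 30 RUB/GB
    excess_in = max(0.0, in_gb - out_total / 4)
    bill += excess_in * 30
    # Outgoing foreign traffic beyond outgoing Russian: 8.7 RUB/GB
    excess_foreign = max(0.0, out_foreign_gb - out_ru_gb)
    bill += excess_foreign * 8.7
    return bill

print(monthly_overage_rub(in_gb=100, out_ru_gb=600, out_foreign_gb=200))  # 0.0: within the limits
print(monthly_overage_rub(in_gb=300, out_ru_gb=200, out_foreign_gb=500))  # both conditions violated
```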


It is noteworthy that, for simplicity of accounting, the data center did not bother with lists of networks beyond MSK-IX and SPB-IX (the traffic exchange points in Moscow and St. Petersburg), which had long been interconnected to ensure good connectivity between St. Petersburg and M9 or M10 (the exchange points in Moscow) and back. At the time the Internet was still not widespread in the regions, and the share of regional traffic was genuinely small, especially over leased lines. What can I say: Norilsk got its own fiber only in September 2017, just this year, becoming the last large Russian city to get its own fiber-optic line! The project cost about 40 million US dollars; the total length of the line from Novy Urengoy is 986 km, and its capacity is 40 Gbit/s, expandable to 80 Gbit/s in the future.

It's funny to see how in 2017 some people can still rejoice at high-speed Internet that has been available to most of us for over 10 years now:

Well, firstly, I have already started watching YouTube; before that I watched it once a year, when I went to Moscow. I would just open it... and you couldn't drag me away, and if I also went to a torrent site and downloaded something, that was it... Now I can watch in peace. Say a video comes out, I watch it once a week, and I don't have to swallow the whole pile of information at once. And I can talk to people on Skype! That is absolutely wonderful! I walk along filming: "I'm coming, guys, look, it's winter!"; the only downside is that the iPhone shuts off in the cold.

You can watch a detailed video about the fiber-optic line project itself here: part 1, part 2, part 3, part 4. Just keep in mind that the journalists made some inaccuracies: the satellite channel, according to them, was only 1 Gbit/s for the whole city, whereas in fact total traffic before the fiber line went in was about 2.5 Gbit/s. And you have to understand that the problem was not only speed but also the high ping of satellite Internet, which users had to fall back on again during an accident on the fiber-optic line.

At the end of 2006, the first sites with online films, file-sharing services and other similar resources appeared in the Russian Federation. To cut the cost of foreign traffic (Ukrainian traffic could be substantial and did not fit into the ratios prescribed by the likes of Agave), large projects tried to place some of their servers in data centers connected to UA-IX, or artificially generated additional Russian traffic using torrents distributed exclusively to Russian users and, in some cases, file-sharing services available only from Russian IP addresses. As a result, anyone in Ukraine who wanted to download at full speed often bought a Russian VPN, since the speed from the same ifolder.ru was always higher from the Russian Federation:

File-sharing services, despite the popularity of torrents, gained explosive popularity, since download speeds from them were often much higher than over torrents, and there was no need to seed and maintain a ratio (giving away more than you download, or at least no less than a third of it). This was largely due to asymmetric DSL channels, where upload speed was much lower than download speed (10 times or more), and not every user wanted to seed and store piles of files on their computer anyway.

In effect, Wnet paid the subscriber $1 per GB of Ukrainian traffic, while foreign traffic cost the subscriber $10 per GB, provided the ratio of outgoing to incoming traffic was 4:1. This was still a significant fee: foreign traffic only came out free if you generated ten times as much Ukrainian traffic, so to get 9 Mbit/s to foreign destinations for free you had to push 90 Mbit/s to Ukraine. Quite different from Agave's offer, where it was enough for foreign traffic not to exceed Russian traffic.

Therefore the Volya data center offer discussed earlier was much more attractive than Wnet's. Wnet, moreover, decided on October 1, 2006 to withdraw from the Ukrainian traffic exchange point UA-IX after UA-IX refused to sell it the additional ports it needed, possibly as the result of a "peering war" (lobbying by other providers with whom Wnet had begun to compete), possibly because there was simply no technical capacity to provide more ports, or perhaps because Wnet had violated the agreement and built direct peering connections with other participants of the exchange point (a feature of the exchange rules…

Thanks to this, by 2008 Volya already had 20 Gbit/s of connectivity at UA-IX and 4 Gbit/s to the world from several backbone operators. The further development of the hosting market can be traced through our own history:

We began providing hosting services to users of our resource in 2006, and in July 2009 we spun them off into a separate project, ua-hosting.com.ua, which later went international, moved entirely abroad, and is now known under the ua-hosting.company brand and available via the short domain http://ua.hosting.

It is worth noting that over the past 10 years the market has undergone enormous changes, and the reason for this is not only a significant reduction in the cost of main channels, but also a redistribution of the audience among many projects due to the closure of once popular projects. Such successful resources as file hosting services, which occupied the top places in terms of traffic in the Alexa ranking, faded into oblivion for many reasons, but mainly due to the ongoing war with copyright holders.

Thus in Ukraine the once-famous ex.ua, which generated over 15% of the total traffic of the Ukrainian exchange point UA-IX (essentially a Kiev exchange point, since regional operators were rarely represented there, especially after the Giganet and DTEL-IX exchange points appeared), was shut down after the closure of the no less famous fs.to, which at one time bought 100 Gbit/s in the Netherlands from us. The case of the famous Megaupload was even more resonant: over 600 of its servers were seized from the data center in the Netherlands where we are located. Rutracker was blocked in the Russian Federation by Roskomnadzor, and torrents.net.ua ceased to exist in Ukraine for fear of reprisals.

The audience moved to YouTube, Instagram and other social networks. Adult sites have perhaps not lost their popularity, only now making money on teaser advertising has lost all meaning for webmasters from the Russian Federation and Ukraine: with today's advertising prices, paying for foreign channels has become quite problematic, even though channel prices have dropped significantly compared even with 2012, when it seemed they could not get any cheaper.

The situation on the backbone market, which determines the relative cost of traffic delivery

As we can gather from the above, the price of Internet traffic depends on where the traffic needs to be delivered, how popular that direction is, what per-flow speed is required and with what delay. The price also depends on which communication channels the traffic passes through, which determines how direct the route is and what priority the traffic gets, which in turn determines the final latency (ping) from one point to another.

For example, 10 Gbit/s from Novy Urengoy to Norilsk will clearly not cost $2,000 or even $6,000 per month, since over $40 million was invested in building the fiber-optic line. The project is supposed to pay for itself in 15 years, so the monthly cost of the 40 Gbit/s channel is 40/15/12 = $0.22 million, or about $55,000 per month for 10 Gbit/s, and this is not yet a channel to the Internet, only the cost of delivering traffic over a high-quality fiber line between two remote localities. This money now has to be collected from the population of Norilsk, who will watch the same YouTube (traffic to which will cost even more, since backbone operators have to be paid for delivery all the way to YouTube's networks), which means traffic will be quite expensive and the activity of the local population will be held back by that price. There is also the option that YouTube wants to be "closer" to its users and agrees to pay part of the cost of the channel to them, in which case access to YouTube for the population of Norilsk may become cheaper. This example clearly shows what the price of access to specific Internet resources is made of. Someone always pays for your traffic, and if it isn't you, it is either the advertisers and resources that generate the traffic, or a backbone provider, or simply an ISP that benefits from traffic in that direction (say, to get discounts in other directions or some tax benefit, which may be relevant in the case of Norilsk, or simply because it bought a wide channel to get a volume discount and the channel is sitting idle).
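The amortization arithmetic from this example, written out (using the quoted figures of roughly $40M capex, a 15-year payback and 40 Gbit/s of capacity):

```python
# Amortizing the Norilsk fiber line into a monthly channel cost.
# Figures are the ones quoted in the text above.

capex_usd = 40_000_000     # construction cost
payback_years = 15         # planned payback period
capacity_gbit = 40         # line capacity, Gbit/s

monthly_cost_total = capex_usd / payback_years / 12          # ~$222,000 per month for 40 Gbit/s
cost_per_10_gbit = monthly_cost_total / capacity_gbit * 10   # ~$55,500 per month for 10 Gbit/s

print(round(monthly_cost_total), round(cost_per_10_gbit))
```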

Tier 1 backbone operators, such as Cogent, Telia, Level 3, Tata and others, are distinguished by the fact that they charge for traffic delivery from everyone who connects to them, since traffic generators are often unable to exchange traffic directly with the providers where their audience sits. This gives rise to so-called peering wars, including between Tier 1 operators and large traffic generators, when priority is given to specific consumers while the price of cooperation for others is artificially inflated, either to crush a competitor or simply for profit, because the traffic generator has no other options. Disputes therefore arose very often, including in court, since some companies did not maintain net neutrality and tried to disguise that fact.

Thus, the dispute between Cogent and Google over IPv6 traffic has still not been resolved, which is why direct peering between the companies is simply impossible. Cogent demands money from Google for traffic into its network, while Google wants to peer for free, since the mass of Cogent's subscribers (data centers, home Internet providers) are active consumers of traffic from Google's networks, albeit over IPv4 rather than IPv6, which would reduce latency and the cost of that traffic for those subscribers if the share of IPv6 traffic grew. But this is apparently unprofitable for Cogent as a Tier 1 backbone provider: external traffic from its networks is paid for by Tier 2 providers (who pay Tier 1 providers and profit from Tier 3 providers) and even Tier 3 providers (who pay Tier 2 providers and take money from end clients).

To understand what the final price of traffic for a resource consists of, let's consider the situation using the example of the popular service Cloudflare, whose purpose is to bring websites "closer" to their audience, reduce the load on infrastructure by caching static content, and filter out DDoS attacks.

Naturally, Cloudflare places servers in every region where there is demand for its traffic, which is practically the whole world. To save on traffic, it tries to sign peering agreements with regional providers, who can deliver Cloudflare's traffic to users for free, bypassing expensive Tier 1 backbone operators that charge for traffic in any case. Why is this attractive for local providers? With significant traffic volumes they, like Cloudflare, would otherwise pay Tier 1 operators serious money for delivery; it is far cheaper to build a direct connection once and exchange traffic for free than to pay the backbone operator every month. Even where direct peering is impossible, it can be cheaper to connect through other transit providers whose traffic costs much less than Tier 1 transit. The route becomes less direct, the ping may rise slightly and per-flow speed may drop a little, but the quality can still be acceptable for the sake of such savings.

But peering agreements are not always possible, so in some regions Cloudflare has to buy a fairly large share of its connectivity from backbone providers, and the price of traffic varies greatly by region. Unlike some cloud services, such as Amazon Web Services (AWS) or traditional CDNs, which often pay for traffic by the terabyte, Cloudflare pays for the peak utilization of a channel over a period of time: the maximum number of megabits per second used during a month on any of its backbone links. This billing method is called burstable, and its most common variant is the 95th percentile. The 95th percentile method provides flexibility and allows bursty use of the channel: the customer may exceed the bandwidth set by the tariff for 5% of the total time of using the channel without any increase in cost. For example, if your tariff provides 5 Mbit/s, the permissible bandwidth may be exceeded for up to 36 hours each month (5% of 30 days). Utilization is measured and recorded every 5 minutes for a month, as the average over each five-minute interval: the amount of data transferred during the interval is divided by 300 seconds. At the end of the month the top 5% of values are discarded, the maximum of the remaining 95% is taken, and that value is used to calculate the payment for the bandwidth.
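A minimal sketch of the 95th percentile calculation described above (Python). The sample values are invented for illustration; the only assumptions are a 30-day month and 5-minute samples, as in the text.

```python
# 95th percentile (burstable) billing: drop the top 5% of 5-minute samples,
# then bill the largest value that remains.

def ninety_fifth_percentile(samples_mbps):
    """Return the billable Mbit/s value for a month of 5-minute average samples."""
    ordered = sorted(samples_mbps)
    cutoff = int(len(ordered) * 0.95)   # keep the lowest 95% of samples
    return ordered[cutoff - 1]          # largest value among the remaining 95%

# A 30-day month has 30 * 24 * 12 = 8640 five-minute intervals.
# Suppose the link idles at 50 Mbit/s but bursts to 400 Mbit/s for 30 hours in total.
samples = [50.0] * 8280 + [400.0] * 360   # 360 intervals = 30 hours of bursting
print(ninety_fifth_percentile(samples))   # 50.0 -- the bursts fit inside the free 5% (36 hours)
```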

There is a legend that in its early days Google used 95th percentile contracts to run indexing at very high throughput within a single 24-hour window, while for the rest of the time traffic consumption was much lower, providing significant savings on the cost of the consumed channels. A clever, but clearly not very durable strategy: Google subsequently still had to build its own data centers and even its own channels in order to index resources more often and pay less for intercontinental transport.

Another subtlety is that backbone providers usually bill only the predominant direction of traffic (incoming or outgoing), which in the case of Cloudflare means incoming traffic is effectively free. Cloudflare is a caching proxy, so outgoing traffic (out) usually exceeds incoming (in) by roughly 4-5 times. Bandwidth is therefore billed purely on the outgoing values, and the incoming side costs nothing extra. For the same reason the service does not charge an additional fee when a site comes under a DDoS attack: the attack certainly increases incoming traffic, but unless it is very large, incoming traffic still will not exceed outgoing, and so it does not raise the cost of the channels used.
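An illustrative sketch of that billing logic (Python). The percentile values are invented, and the "bill the larger direction" model is a simplification of the contracts described above, not any specific carrier's terms.

```python
# Pay only for the predominant direction: the bill follows whichever of the two
# 95th-percentile values (inbound or outbound) is larger.

def billable_mbps(in_95p, out_95p):
    """Illustrative model: the larger direction drives the bill."""
    return max(in_95p, out_95p)

normal = billable_mbps(in_95p=200, out_95p=900)        # 900 -- outbound dominates for a caching proxy
under_attack = billable_mbps(in_95p=700, out_95p=900)  # still 900 -- a moderate DDoS changes nothing
print(normal, under_attack)
```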

Most peering traffic is usually free, which cannot be said about traffic from Netflix, which after much debate was forced to pay Verizon and Comcast for peering connections in order to provide acceptable video streaming for users inside those operators' networks.

In the diagram above we can see how the number of Cloudflare's free peering connections grew over the course of 3 months, both over IPv4 and over IPv6. And below, also over 3 months, we can see the global growth of Cloudflare's peering traffic: the company currently uses over 3,000 peering connections and saves about 45% of what it would otherwise pay for expensive backbone transit.

Cloudflare does not disclose exactly how much it pays for backbone transit, but it does provide comparative figures for different regions, from which we can draw an approximate conclusion about the scale of the costs.

Let's look at North America first. Assume a blended average of $10 per Mbit/s per month across all transit providers as the North American benchmark. In reality the payment will be lower and will depend on volumes and on the chosen backbone operator, but it serves as a baseline for comparing costs between regions. At that figure, each 1 Gbit/s costs $10,000 per month (again, remember this value is higher than the actual one and is typical of a retail purchase; it is just a reference figure that makes the differences easy to see).

The effective price of bandwidth in a region is a blend of transit costs ($10 per Mbit/s) and peering costs ($0 per Mbit/s). Every byte delivered over peering is a potential transit byte that does not have to be paid for. While North America has some of the lowest transit prices in the world, it also has lower average peering rates. The graph below shows the ratio of peering to transit traffic in the region; although it has improved for Cloudflare over the past three months, North America still lags behind every other region of the world in terms of peering.

Thus, while Cloudflare's share of peering traffic exceeds 45% globally, in North America it is only 20-25%, which puts the effective cost of 1 Mbit/s, before discounts, at $7.5-8. As a result, North America is only the second cheapest region in the world for traffic. So where is traffic cheapest?
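A minimal sketch of the "effective cost" blend used in these comparisons (Python): peered bytes cost nothing, transit bytes cost the regional benchmark, so the effective price is simply the transit price scaled by the transit share. The same formula reproduces the regional figures discussed below.

```python
def effective_cost(transit_price_per_mbps, peering_share):
    """Blended $/Mbit/s: peered traffic is free, the rest is billed at the transit price."""
    return transit_price_per_mbps * (1.0 - peering_share)

# North America: $10 benchmark with 20-25% of traffic peered
print(effective_cost(10, 0.20), effective_cost(10, 0.25))   # 8.0 and 7.5 $/Mbit/s
```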

We have already mentioned Europe, where, thanks to the historical concentration of population in particular regions, there have long been a great many exchange points. This is what makes it possible to obtain a larger share of peering traffic and, as a result, the cheapest traffic in the world: Cloudflare's share of transit traffic there is at the level of 45-50%.

As a result, our reference cost drops to $4.5-5 per Mbit/s or less. The share of peering traffic depends directly on the number of participants at the largest European exchange points: AMS-IX in Amsterdam, DE-CIX in Frankfurt and LINX in London. In Europe, exchange points are mostly run by non-profit organizations, while in the USA exchange points are mainly commercial (Equinix in New York, for example), which significantly affects the number of participants and, consequently, the number of networks willing to sign peering agreements. For comparison: according to 2014 statistics there are about 1,200 participants in Amsterdam, versus only about 400 in the USA.

The ratio of peering to transit traffic in Asia is roughly the same as in Europe; Cloudflare's graph shows a value close to 50-55%. However, transit there costs 6-7 times more than the European reference price, as much as $70 per Mbit/s. The effective cost of traffic is therefore in the range of $28-32 per Mbit/s, 6-7 times higher than in Europe.

Transit in Asia is more expensive for several reasons. The main one is less competition in the region and more monopolistic providers. Secondly, the Internet services market is less mature. And finally, a glance at a map of Asia shows plenty of obstacles: water, mountains, inaccessible and sparsely populated regions. Laying submarine cables is significantly more expensive than laying fiber even in the most inaccessible land region (which is also far from cheap), so the cost of intercontinental transit and of transit to remote areas is offset by transit prices within the rest of the continent, which are artificially inflated to cover the costs of this external and internal "remote" connectivity.

Latin America became a new region of presence for Cloudflare, and within 3 months the share of peering traffic there grew from 0 to 60%.

However, the cost of transit, as in Asia, is very high: transit there costs 17 times more than in North America or Europe, and the effective cost is $68 per Mbit/s, twice as high as in Asia, even though the share of peering traffic is among the best in the world. Another problem in Latin America is that many countries have no carrier-neutral data centers, where participants can freely cross-connect and exchange traffic with each other. Brazil turned out to be the most developed in this respect, and after Cloudflare built its own data center in Sao Paulo, the number of peerings grew significantly, which made it possible to reach the 60% figure for exchanged traffic.

The most expensive traffic for Cloudflare is probably in Australia, since delivering it requires a great number of submarine cables. And although the share of peering traffic within the region reaches 50%, Telstra, the monopoly operator of the Australian telecom market, will not let the cost of domestic transit fall below $200 per Mbit/s, citing how widely the population is spread across the region; that is 20 times higher than the reference price in Europe or the USA. As a result, the effective cost of traffic in 2015 stood at about $100 per Mbit/s, one of the highest in the world. In total Cloudflare spends roughly as much on transit in Australia as it does on traffic in all of Europe, even though Australia's population is 33 times smaller (22 million versus 750 million in Europe).

Interestingly, in Africa, despite a high transit price of around $140 per Mbit/s, Cloudflare managed to negotiate peering with 90% of providers, bringing the effective cost of traffic down to $14 per Mbit/s. Thanks to this, sites hosted in London, Paris and Marseille began to open noticeably faster, and the presence in West Africa was expanded, giving faster access to European resources for residents of, say, Nigeria, with its roughly 100 million Internet users. In the Middle East the share of peering traffic reached 100%, making it the cheapest region in the world for Cloudflare, if the costs of building and supporting data centers are not counted.

Just 2 years later, at the end of 2016, prices in the most expensive region, Australia and Oceania, had fallen by 15%, bringing the price of traffic there to $85 per Mbit/s. For Cloudflare the statistics then looked something like this:

Interestingly, today there are 6 backbone providers - HiNet, Korea Telecom, Optus, Telecom Argentina, Telefonica and Telstra - whose traffic costs Cloudflare an order of magnitude more than traffic from any other connectivity provider in the world, and who refuse to discuss lower transit prices. For Cloudflare, traffic to these 6 networks amounts to less than 6% of total consumption, yet almost 50% of the money spent on connectivity goes to these 6 most expensive networks. Naturally, this could not go on forever, and Cloudflare decided to route the traffic of its "free" users away from its presence in Australia and New Zealand to more distant data centers (in Singapore or Los Angeles), where the cost of external channels is more reasonable. Ironically, Telstra ended up paying more, since its undersea backbone links became busier after the move, which may be a good sign for lower prices in the region for companies like Cloudflare.

Absolute values of traffic delivery costs for transit providers in different regions

Telegeography is a wonderful resource that lets you see not only the growth of submarine backbone networks but also the average market prices for traffic delivery, calculated as the median across transit backbone providers in a given region. To show you the current order of prices, I had to order a small report from them; analyzing the data produced the following results.

As you can see, the cost of the Dedicated Internet Access (DIA) service is currently at its highest in Mumbai, Sao Paulo and Caracas, while in Europe and the USA it is around $6 and $8 per Mbit/s, respectively.

A logical question follows: at such prices, how is it possible today to rent out a server such as 2 x Intel Dodeca-Core Xeon E5-2650 v4 / 128GB DDR4 / 6 x 480GB SSD with a 1 Gbit/s channel and a 100 TB traffic limit for $249/month, which is equivalent to 300+ Mbit/s of real consumption, if according to the current report 1 Mbit/s costs about $6/month on average?
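A quick sketch of where "300+ Mbit/s of real consumption" comes from (Python), assuming a 30-day month and decimal terabytes; the 100 TB figure is the limit quoted above.

```python
# Convert a monthly traffic cap into the equivalent sustained bandwidth.
terabytes_per_month = 100
seconds_per_month = 30 * 24 * 3600                 # 2,592,000 s
bits_per_month = terabytes_per_month * 1e12 * 8    # 8e14 bits
average_mbps = bits_per_month / seconds_per_month / 1e6
print(f"{average_mbps:.0f} Mbit/s")                # ~309 Mbit/s of continuous consumption
```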

How can traffic in data centers be sold cheaper?

Large data centers, such as EvoSwitch in Amsterdam, where we are located, have their own well-developed fiber backbone networks, which allow significant savings on traffic delivery to particular regions and increase the number of possible peerings. In 2017, our partner became the owner of one of the largest and highest-quality networks.

As the illustration shows, the total network capacity has reached 5.5 Tbit/s, with 36 points of presence around the world, over 2,000 peering connections and direct connections to 25 traffic exchange points. All of this, of course, affects the effective cost of traffic, which, as we remember, is the blend of paid backbone transit and free peering, and it can be reduced further by charging the consuming provider for peering connections. That is, traffic can be paid for not only by the generator but also by the recipient: the provider into whose network the traffic flows and who is interested in setting up a peering connection in order to pay backbone providers less, saving on traffic by the same scheme a data center uses. On top of that, a data center almost always has the opportunity to sell its excess "incoming" capacity for serving home Internet users, who mainly need exactly this inbound traffic to access the Internet and for whom it would otherwise sit unused in most data centers.

However, even such an extensive network does not make it possible to keep the cost of traffic arbitrarily low. So whatever guaranteed traffic conditions a data center offers, you have to understand that the final low price is achieved by selling bandwidth with a reasonable degree of oversell, that is, selling more connectivity than physically exists, while strictly tracking the real traffic needs of the data center's users so that each user still gets their guaranteed bandwidth at the moment they need it. And the more users are served, and the more peering and backbone channels are connected to the network, the more can be saved on traffic.

Let's look at an example. Ten users need a guaranteed 100 Mbit/s channel to their servers, but they do not always use 100% of it, and rarely all at the same time. Analysis of real consumption shows that all ten users together consume no more than 300 Mbit/s at peak. Buying 1 Gbit/s of dedicated bandwidth, or 2 Gbit/s from different operators once redundancy is taken into account, and charging each user for a fully dedicated (essentially doubled) channel becomes impractical. It is far more reasonable to buy roughly a third of that, 700 Mbit/s, from two independent backbone operators: this still gives each of the 10 clients a dedicated 100 Mbit/s channel at the stated level of consumption, with double fault tolerance, and even leaves about 100 Mbit/s "for growth" in case someone's consumption starts to rise, which buys time to connect additional channels. If traffic is supplied by three independent backbone providers, the purchase becomes even more favorable: it is enough to buy only 500 Mbit/s, or even less, because with high probability only one of the three channels fails at a time, taking away no more than about 166 Mbit/s of connectivity when at most 300 Mbit/s is needed. That leaves 334 Mbit/s of bandwidth available at any time, which is sufficient for our subscribers' needs even during a failure on one of the uplinks.
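A minimal sketch of that sizing logic (Python): buy enough total capacity, split evenly across N independent uplinks, so that losing any one uplink still leaves the observed peak demand covered. The headroom parameter is my own illustrative addition.

```python
import math

def capacity_needed(peak_mbps, uplinks, headroom_mbps=0):
    """Total Mbit/s to purchase so that N-1 uplinks still cover peak demand (plus optional headroom)."""
    return math.ceil((peak_mbps + headroom_mbps) * uplinks / (uplinks - 1))

peak = 300  # measured simultaneous peak of all ten 100 Mbit/s customers
print(capacity_needed(peak, uplinks=2))                    # 600 -- the text buys 700 to leave room for growth
print(capacity_needed(peak, uplinks=3))                    # 450 -- the text buys 500, again with headroom
print(capacity_needed(peak, uplinks=3, headroom_mbps=50))  # 525
```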

In reality the situation is even simpler, and the degree of fault tolerance and redundancy is higher, because there are often not ten but tens of thousands of clients with a 100 Mbit/s channel, and most of them use very little traffic. According to our statistics, 1,000 servers with an unmetered 100 Mbit/s channel consume on average only 10-15 Gbit/s at peak, or even less, which is 10-15% of the bandwidth allocated to them. At the same time, everyone retains the ability to consume 100 Mbit/s whenever they need it, without any metering, and channels are purchased from a large variety of backbone operators. There are, of course, even more peering connections, which often makes connectivity cheaper and better and removes the risk of losing a huge share of connectivity at once. Thanks to this, the share of capacity reserved for fault tolerance drops from 50% to 5% or less. There are, of course, clients who load their channels flat out, but there are also those who consume very little traffic while renting a dedicated server with an unmetered 100 Mbit/s channel, simply because it is convenient: no fear of expensive overage fees, or the subscriber simply does not know how much traffic they really need or how to calculate it. In effect, the users who do not consume all of their allocated bandwidth pay for the traffic of the users who use the channel to the full.

Among other things, one must also remember the daily distribution of traffic to Internet projects, which also helps reduce the cost. If your channel is 100% loaded in the evening, at the time of peak visits to your resource, then for the rest of the day the load will most likely be far below 100%, down to 10-20% at night, and the free capacity can be used for other needs (we leave aside generating traffic to another region, since that would most likely mean paying dearly for transport). Otherwise, visitors at peak hours will start running into problems and leaving the site, and traffic will inevitably fall due to worsening behavioral factors and a lower position in search results, if the project's traffic is mostly search traffic.

In the case of gigabit connections, of course, channel utilization in the early period of the offer was higher than 10-15% and could reach 50% or more, since such servers used to be ordered by traffic-generating subscribers for whom 100 Mbit/s per port was not enough, while a gigabit port was much more expensive and made no sense for ordinary users with no real need for it. Nowadays, when 1 Gbit/s and even 10 Gbit/s can be had at home and the price difference between a switch supporting 1 Gbit/s and one supporting 100 Mbit/s has become negligible, it is much more profitable to give everyone access to a 1 Gbit/s channel, even when they do not really need it, than to limit the bandwidth: a client can then download the data they need as quickly as possible and, as a result, free up the bandwidth much sooner for the next subscriber, in cases where they do not need to generate traffic constantly. That is why the actual utilization of servers with a 1 Gbit/s channel and a 100 TB limit turned out to be well under 10%: most users, of course, do not need such a channel all the time, and they free it up ten times faster for subsequent subscribers.

With this way of providing Internet channels there is a clear need to monitor traffic consumption in individual segments of the data center network, and even in each rack, so that as soon as someone's need for bandwidth grows and the traffic reserve begins to shrink, an additional channel can be added, thereby maintaining a guaranteed "unlimited" service for everyone. Overall, this approach saves significant amounts on external connectivity and makes it possible to offer prices an order of magnitude lower than without it, and even to earn money on traffic. After all, a data center cannot sell traffic at cost; it has to earn, since it spends time and money maintaining the network, and a healthy business must be profitable.
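A small sketch of the per-segment headroom check implied above (Python). The rack names, numbers and the 20% reserve threshold are purely illustrative assumptions, not a description of any real monitoring system.

```python
# Flag network segments whose free headroom has dropped below the reserve ratio,
# so an extra uplink can be ordered before customers notice congestion.

def segments_needing_capacity(utilization_mbps, capacity_mbps, reserve_ratio=0.2):
    """Return the names of segments where free capacity is below the reserve ratio."""
    flagged = []
    for segment, used in utilization_mbps.items():
        capacity = capacity_mbps[segment]
        if (capacity - used) / capacity < reserve_ratio:
            flagged.append(segment)
    return flagged

usage = {"rack-01": 820, "rack-02": 420, "rack-03": 930}      # current peak Mbit/s per rack
limits = {"rack-01": 1000, "rack-02": 1000, "rack-03": 1000}  # provisioned Mbit/s per rack
print(segments_needing_capacity(usage, limits))               # ['rack-01', 'rack-03']
```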

The oversell factor is therefore present everywhere, to a greater or lesser extent, even when selling offers with a 10 Gbit/s unmetered channel to dedicated servers, which, one would think, should consume their traffic in full. Reality turned out differently. At one point we sold over 50 dedicated servers, each with a 10 Gbit/s unmetered connection, yet our total generated traffic barely exceeded 250 Gbit/s, even though the same channel was also used by another 900+ dedicated servers with 100 Mbit/s and 1 Gbit/s connections. Thanks to this we were able to offer servers with a guaranteed 10 Gbit/s channel at incredible prices of $3,000/month, and later almost 2 times cheaper (starting from $1,800). We were the first to sell connectivity at such low prices, and that is exactly why we were able to generate so much traffic and gain a lot of satisfied customers.

Today we are ready to go even further: thanks to cooperation with the Tier I backbone operator Cogent, we can sell excess connectivity in certain segments of its network in the Netherlands and the USA even cheaper - from $1,199 for a server with an unmetered 10 Gbit/s channel and from $4,999 for a server with an unmetered 40 Gbit/s channel.

https://ua-hosting.company/serversnl - you can place an order here; if you need a US location, open a request in a ticket. For our regions, however, the Netherlands is the optimal location in terms of connectivity.

2 x Xeon E5-2650 / 128GB / 8x512GB SSD / 10Gbps - $1199
2 x Xeon E5-2650 / 256GB / 24x512GB SSD / 10Gbps - $2099
2 x Xeon E5-2650 / 256GB / 24x1TB SSD / 10Gbps - $3599
2 x Xeon E5-2650 / 256GB / 24x2TB SSD / 10Gbps - $6599

2 x Xeon E5-2650 / 256GB / 8x1TB SSD / 20Gbps - $1999
2 x Xeon E5-2650 / 256GB / 24x512GB SSD / 20Gbps - $2999
2 x Xeon E5-2650 / 256GB / 24x1TB SSD / 20Gbps - $4599
2 x Xeon E5-2650 / 256GB / 24x2TB SSD / 20Gbps - $7599

2 x Xeon E5-2650 / 256GB / 24x512GB SSD / 40Gbps - $4999
2 x Xeon E5-2650 / 256GB / 24x1TB SSD / 40Gbps - $5599
2 x Xeon E5-2650 / 256GB / 24x2TB SSD / 40Gbps - $8599

In individual cases we can offer reasonable prices for 100 Gbit/s unmetered connections; contact us if you need such connectivity.

Of course, Cogent and we expect that you will not consume all the bandwidth allocated to you, and the proposed configurations themselves encourage this: used with a RAID controller, it is very hard to push more than 6 Gbit/s, since the controller becomes the bottleneck. If, however, you use the drives independently, traffic can be distributed in an optimal way. In any case we guarantee the ability to consume the stated bandwidth, regardless of what we and Cogent are hoping for. Moreover, Cogent is selling excess connectivity that would otherwise sit idle. And do not forget that Cogent, as a backbone provider, takes money from everyone: the traffic you generate will in any case also be paid for by the provider into whose network it arrives.

However, you should not expect that by buying a server with such a channel you will get 10, 40 or 100 Gbit/s per stream; for that kind of money this is simply impossible, and it is rarely needed. Point-to-point transport at high speed per stream can cost serious money, in some cases $55,000 for 10 Gbit/s, as with the Novy Urengoy - Norilsk fiber line discussed above. But excellent overall connectivity with the Internet is guaranteed. For most projects an average speed per stream of over 10 Mbit/s is enough, which allows hosting projects with Ultra HD video streaming and lets 1,000-4,000 people watch "online" from a single server.
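A quick sketch of where the "1,000-4,000 viewers per server" estimate comes from (Python): divide the port capacity by the per-viewer stream bitrate, using the 10 Mbit/s Ultra HD figure from the text and ignoring protocol overhead.

```python
def concurrent_viewers(port_gbps, stream_mbps=10):
    """How many streams of a given bitrate fit into the port (overhead ignored)."""
    return int(port_gbps * 1000 // stream_mbps)

print(concurrent_viewers(10))   # 1000 viewers on a 10 Gbit/s port
print(concurrent_viewers(40))   # 4000 viewers on a 40 Gbit/s port
```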

In some cases, though, the per-stream speed can be significant even with a small channel fee. Last year, 10 Gbit/s home Internet began to spread in the United States, when for a modest $400/month it became possible to get such an "unlimited" channel at home.

In such cases, the bottleneck is often the home router providing Wi-Fi access (capable of around 300 Mbit/s), which forces people back to wired connections and even to installing servers at home, along with sufficiently powerful computers and storage devices, so as not to run into their limits when using the channel. Why is this needed? Many people now work with data from home. James Busch, a radiologist from the USA, analyzes patient data at home, and the new channel saves him a significant amount of time.

“The average x-ray contains about 200 megabytes of data, while PET scans and 3D mammography can take up to 10 gigabytes. So we are forced to process hundreds of terabytes of data. We calculated that we save on average about 7 seconds per study using a 10 Gbit/s connection instead of a gigabit one. That may not seem like much, but multiplied by the 20-30 thousand studies we conduct per year, it turns out we save about 10 days of productive work just because we improved the connection speed by an order of magnitude.”

So, if you need high per-stream speed at minimal cost, place your 10, 20, 40 or 100 gigabit server as close to your users as possible. Then there is a good chance you will be able to push traffic to some segments of the Internet at 1 or even 10 Gbit/s per stream.

Our time opens up unique opportunities for new achievements. It is hard to argue today that hosting or renting a dedicated server is too expensive, and starting your own business or project has never been easier. The most powerful server configurations are now available, in some cases exceeding the capabilities of servers from ten years ago by up to three orders of magnitude, yet priced not much above hosting in 2005 - everyone can now afford them. Traffic has become thousands of times cheaper, and channel speeds are higher. How you use them depends only on you. Anyone can come up with an interesting Internet project, so stop wasting time. Rent a dedicated server, or at least a virtual one, and start working today, even if you don't need it yet and know nothing about it - it will serve as good motivation to move forward. Use these opportunities to make our world a better place. Even if you have never had any experience in web development and building Internet projects, it is never too late to start; I once started from zero and generated more traffic than the whole of Belarus! I hope my experience will be useful to you. We are building the Internet - join us!

BLACK FRIDAY CONTINUES: 30% discount on first payment using promo code BLACK30% when ordering for 1-6 months!

These are not just virtual servers! These are VPS (KVM) with dedicated storage, which can be no worse than dedicated servers, and in most cases better! We have made VPS (KVM) with dedicated drives in the Netherlands and the USA (configurations from VPS (KVM) - E5-2650v4 (6 Cores) / 10GB DDR4 / 240GB SSD or 4TB HDD / 1Gbps 10TB) available at a uniquely low price - from $29/month, with RAID1 and RAID10 options available. Don't miss the chance to order a new type of virtual server, where all the resources belong to you, as on a dedicated server, while the price is much lower and the hardware is far more productive!

How do you build corporate-class infrastructure with Dell R730xd E5-2650 v4 servers worth 9,000 euros for pennies? Dell R730xd 2 times cheaper? Only here: 2 x Intel Dodeca-Core Xeon E5-2650v4 / 128GB DDR4 / 6x480GB SSD / 1Gbps / 100 TB from $249 in the Netherlands and the USA!


Gigabit Internet at home - and what to do with it? We test the ultra-high-speed network and look for its weak points

The Internet is expensive, the speed is low - such complaints will probably never go away. Looking at both points, one can argue that prices for network access in Belarus, all things considered, are quite acceptable. But what about speed? For some, a megabit is enough; for others, even 100 will not suffice. It all depends on one's needs, and modern content cannot be called "light", while its providers do not worry too much about the "width" of your channel. As an experiment, we asked the Internet operator Atlant Telecom to provide home gigabit Internet, to see whether Belarusians need the notorious 1 Gbit/s.

What connection speed is considered comfortable? In some countries 5 Mbit/s Internet is considered the "social" minimum. South Korea has long held the lead with 28.6 Mbit/s, while the global average is 7.2 Mbit/s. For comparison, according to an Akamai report, the average speed in Belarus is about 9.7 Mbit/s, which puts the country in the fifth ten of the world ranking - a decent result.

But what about the mythical gigabit Internet? Mythical, that is, for the ordinary user who does not know what a data center, the Internet of things or big data are - in other words, for 95% of Belarusians. In principle it could be available to Belarusians today, but for some reason telecom operators either do not offer such tariffs or the supply is limited, although a few years ago there was at least one option.

Connection

Before connecting, I had long been using a tariff with a 50 Mbit/s connection (100 Mbit/s for peering traffic). The pros and cons of such a connection are familiar to many: torrents can eat the entire allocated channel, yet IPTV and games do not suffer much - the speed is enough for everything to work simultaneously.

Actually, the transition to an (even more) high-speed connection consisted of laying a new cable directly from the operator’s equipment, replacing the twisted pair in the apartment itself and the router - and the speed increased 20 times. Then several surprises awaited us.

The first one came from the popular Speedtest. When I tried to measure the connection speed, the operator's equipment got me "banned" (due to the peculiarities of the Speedtest algorithms). It took some time to solve the problem - the provider reconfigured the hardware.

Now that some incredible figures had appeared on the speed-test scoreboard, it was time for the second surprise: it turns out that not every server in Belarus is capable of "lifting" that same gigabit. Well, let's try foreign ones...

The server refused to measure the speed - it either "went down" or issued a "ban"

Only some of the measurement results are shown; the Yandex service refused to speed up

Some remote hosts were sensitive to the load and blocked access, but the speed nevertheless ranged from 450-550 Mbit/s to the USA (Cupertino) to 930 Mbit/s toward Russia, with Europe - Germany, France, Poland - roughly in the middle.

The synthetic tests are done; now for something closer to reality. We look for files on a p2p network and then launch Steam. In the first case, the most popular file got us close to 41 MB per second. That is not the channel's limit, but the result is telling - it does not reach the maximum.

A game of about 30 GB was chosen from the Valve service. We click "Install", but the speed does not rise above 330 Mbit/s (41 MB per second). What's that rustling under the table? That's the bottleneck - a hard drive that has exhausted its capabilities. We select an SSD in the settings, and the speed rises to 64 megabytes per second (about 512 Mbit/s).
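The unit conversion behind those download figures, as a tiny sketch (Python): megabits per second divided by 8 gives megabytes per second, and vice versa.

```python
def mbps_to_MBps(mbps):
    """Megabits per second -> megabytes per second."""
    return mbps / 8

def MBps_to_mbps(MBps):
    """Megabytes per second -> megabits per second."""
    return MBps * 8

print(mbps_to_MBps(330))   # ~41 MB/s -- the HDD-limited Steam download
print(MBps_to_mbps(64))    # 512 Mbit/s -- after switching to the SSD
```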

There is Internet, but no speed

What conclusions can be drawn? In the operator's area of responsibility everything is in order - the gigabit reaches the router - but beyond that the "bottlenecks" begin. The main reasons for reduced speeds are obvious: a hard drive that cannot write data fast enough (a budget SSD may also fail to cope), the overall performance of the computer, and insufficient upload speed on the source's side (the remote side can limit it programmatically).

If the user has their own router, that device may also turn out to be a weak link - its processor and its ports. Moreover, a "gigabit" device may well have 100-megabit Ethernet ports. And then there is the seemingly banal issue of the wires. The old or cheap twisted pair running under and above the baseboards in many homes is often laid with 4-conductor cable, and no matter how hard you beat the tambourine, it will not carry gigabit. With wireless connections it's even more complicated.

“How does this happen? You buy a router that says ‘gigabit’, but that does not mean this speed will be available always and everywhere. Usually it refers to the speed between LAN ports, and between the LAN and WAN ports it may be lower. That is why operators publish recommended and tested models with guaranteed performance.

There is even more marketing in wireless. A simple example: the ‘300 Mbit/s’ or ‘1100 Mbit/s’ label on Wi-Fi gear,” says Oleg Gavrilov, head of the fixed-line department at velcom. A cable provides full-duplex communication with the same throughput in each direction.

Wi-Fi works differently: those 1100 Mbit/s are shared, roughly equally, between the directions. In addition, at speeds above 300 Mbit/s the figure combines the parameters of two frequency bands, which are summed. “And honest manufacturers also note next to the speed figure that the data was obtained in laboratory conditions, free of interference,” Oleg added.

What else affects data transfer speed? LAN ports barely involve the processor, whereas the WAN side is much more demanding on the performance of the device, the router. Hence the question of price: the more powerful the processor, the more expensive the device, even when its other "ordinary" specs are the same.

“Next is the terminal device: laptop, computer, TV, set-top box. It's 2017, and gigabit network cards are present in all more or less modern PCs. With other devices there are nuances, especially if, say, a laptop has a ‘lean’ mobile processor.”

Much, if not everything, depends on what the user does online. When surfing, you will not manage to use even a fraction of those 100 megabits - 5 is enough. If you watch videos, download files and play online games, then 50 Mbit/s is more than sufficient. But here it is a question not only of transfer speed, but also of the capabilities of the computer and its codecs: “You want to watch 4K over the Internet, but it doesn't work or drops to Full HD? It turns out the subscriber's device simply does not support such content.” In practice, YouTube delivered 4K content without delays (on the 50-megabit tariff you often had to wait for buffering). The same goes for 8K, except the computer can no longer cope, turning the video into a slide show.

From a technical point of view, a 50 Mbit/s channel is sufficient for streaming 4K content - with correctly built routes. Today a household typically has a single consumer of ultra-high-definition video, the television, and only a few of those. Plus monitors, of which there are probably more, but on a small diagonal the advantages of 4K films (which are hard to find anyway) are lost. Over time, however, the need for them will grow.

Loading - 5%

Judging by Internet usage patterns, even with a gigabit connection the user's behavior will hardly change: you can play around with tests, download a couple of games and movies, and then return to your usual routine (heavy downloaders and organizers of "home networks" are not counted).

Oleg Gavrilov agrees with us: “Downloading ‘for a rainy day’ is no longer fashionable. Everything can be watched online.”

Objectively this is true, but even so, Internet consumption in my case did not grow. Of course, in the first days traffic set new records - using the gigabit channel I downloaded a mere 48 GB more than usual, and that was due to the intensive tests. Then consumption gradually dropped back to its previous level.

Today, large Belarusian operators providing Internet access are increasingly looking toward GPON technology (which, unlike Ethernet, means "fiber to the apartment" rather than "fiber to the building"). It offers greater capabilities and, among other things, does not require regular replacement of the passive infrastructure as speeds grow.

It is logical to assume that with the spread of 4K and virtual reality content in Belarus, the need for speed will also grow. But for now, Belarusians will have to wait.