Why major Internet providers have abandoned high-speed tariffs. Testing an ultra-high-speed network and looking for its weak points

The cat and I "overclocked" the twisted-pair cable that comes into the apartment a little

It is usually assumed that the "ceiling" of DSL is 20-30 Mbps, of FTTB (fiber to the building plus twisted pair to the apartment) 100 Mbps, and that only PON supports speeds above 100 Mbps. My colleagues and I decided to dispel the myths and "overclock" FTTB on one particular provider's network.

First of all, we climbed into the attic of a building. At least 1 Gbps already comes to every building, and the building node has a suitable gigabit "copper" port. That means we can simply move any apartment where our cable is already laid to such a port and provide speeds above 400 Mbps.

Actually, we have already done this as part of beta tests, and the other day we launched services in Moscow with new speeds into commercial operation. Yes, you can most likely connect.

What was that, cap?

Our backbone and city networks have a reserve of free capacity that significantly exceeds customer demand even at the hours of the highest possible load. Take, for example, New Year, a holiday the cat and I love, when even the grandmothers who have made friends with the Internet and Skype are receiving their congratulations online.

What is the difference with PON

The point is that our FTTB network, which makes all of this possible, already exists. No modifications are required: the cable is already in your apartment, all the wiring is in place, and fiber already runs to the nodes in the buildings. You just need to move the cable to another switch port, and that's it; such a channel already reaches you, you simply didn't know about it. PON, on the other hand, has to be built from scratch as new infrastructure across the entire city. There is another catch there: the optical fibers fan out from regional exchanges and require proprietary client devices. With a regular FTTB network you can use almost anything, although there are relatively few devices tuned for L2TP.

How it works

A hefty transport channel comes from the main ring to the level of the city. Further along the city, several large rings diverge. From them - smaller rings or mesh structures, "stars" are made at the level of entrances. Optical transport links go from the first level to the closet in the entrance. On the access level switch, we will re-switch the cable to a gigabit port ... and that's it, we now have a gigabit link to the client.

These are the results of my colleague, without a cat but with Wi-Fi (802.11ac).

Technical capability

After I was able to test this at several points, we set up such links for all company employees working on the project. Pretty fast, I must say. There are almost no technical restrictions: the only thing is that there are literally a couple of dozen blocks in Moscow where it is necessary to upgrade the equipment a bit, but we are already working on it.

More restrictions

You will laugh, but we have not really found ways to saturate such speeds. It turned out that such a channel can only be filled by very specific tasks: CCTV from a bunch of cameras, HD video for the whole family at once, or a photographer uploading pictures somewhere. Network drives, too. In general, with the exception of torrents, this is a provider's dream: once a day the client grabs one heavy file and is wildly happy.

But there were a number of other things directly related to marketing. First, almost all resources deliver content much more slowly than the channel allows. This is the eternal problem of "why is my Internet slow when the tests show normal speed": we need resources that can deliver high-speed content to a large number of clients at once. So someone will have to part with their illusions; not every resource meets these requirements.
100 Mbps is already very fast for the vast majority of users. Even higher speeds are mostly needed by those who have to work with very large files.


Everything is correct in this photo, including the router

The link to the desktop or laptop should be copper: Wi-Fi, especially with interference from neighboring networks, simply will not deliver the channel at such a speed. That is why the best option is the desktop on cable, with tablets and phones over the air.

End devices themselves can also cut traffic. Naturally, you must have a device that supports 400 Mbps (router or network card). True, a couple of surprises were revealed in the beta with the fact that not all devices can actually hold such traffic, despite statements about this.

Tests

Here begins the most interesting. We took 10 high-performance devices with L2TP support.

Gigabit is fast, especially for home use, so routers must be appropriate. I will say right away that it was not possible to cover and quickly test all models, so we focused on supporting gigabit connection, dual-band Wi-Fi and good feedback from users.

Our shortlist:
Asus RT-68U
D-Link DIR 825 D1
Pre-sale sample from the new manufacturer Totolink
Zyxel Keenetic Ultra
Apple Airport Time capsule

Once the devices had been checked against our office checklists, it was time to test them in the field and see their real performance.

I tried to prepare thoroughly: I took a MacBook Pro 15 Retina (late 2012), my main work laptop; put a 128 GB SSD into a separate desktop and added an Asus PCE-AC68 Wi-Fi adapter so that nothing would hold the speed back; and grabbed a Totolink A2000UA USB Wi-Fi adapter with 802.11ac support just in case. I also took an iPad mini, an iPhone 5 and a Samsung Galaxy Note to test Wi-Fi on them.

To check the speed, in addition to the usual resources, such as speedtest, downloading files, I installed Iperf on one of our servers connected via a gigabit link to our core network. Everything seems to have gone something like this:

A little about the test methodology

In many reviews of routers that I have seen, stands with programs for generating traffic are usually assembled. We decided to do something different: to conduct testing in the same way as a subscriber who checks the speed of our Internet access will do.
The main instruments were:
1) Speedtest.net - nowhere without it
2) Mirror.yandex.ru
3) Iperf - some synthetics
4) YouTube

The list is small, but these resources are enough to judge how fast the Internet access really is: a natural product, so to speak, with minimal synthetics.
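For the iperf part, the measurement boils down to running an iperf client against a server we control. Below is a minimal sketch of how such a run could be scripted in Python; the server name is a hypothetical placeholder, and it assumes iperf is installed on both ends and already started on the server side with `iperf -s`.

```python
# A minimal sketch of scripting the iperf measurements (assumes an iperf
# server is already running on the test host; the hostname is hypothetical).
import subprocess

IPERF_SERVER = "speedtest.example.net"  # hypothetical test server on the core network

def run_iperf(parallel_streams: int = 4, seconds: int = 20) -> str:
    """Run an iperf client against the test server and return its raw output."""
    cmd = [
        "iperf",
        "-c", IPERF_SERVER,            # client mode, connect to the server
        "-t", str(seconds),            # measure for a fixed number of seconds
        "-P", str(parallel_streams),   # several parallel TCP streams to fill the pipe
        "-f", "m",                     # report results in Mbits/sec
    ]
    result = subprocess.run(cmd, capture_output=True, text=True, check=True)
    return result.stdout

if __name__ == "__main__":
    print(run_iperf())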

Let's get to the tests

First, let's see which of the Wi-Fi networks are already nearby.

"People's" range 2.4GHz - neither more nor less

5GHz - even got here, but there are not so many networks, two of them are ours

Asus RT-68U


Top router from Asus. The internals inspire respect: a Broadcom BCM4708A chip, 256 MB RAM, 128 MB flash, 802.11ac support and Beamforming.

Patch cord: speedtest showed 224 Mbps for download and 196 Mbps for upload.

Good result, we continue to test, next in line is Iperf.

On this test the unexpected happened: either the router or iperf started to misbehave, and the results would not rise above 50 Mbps. No matter, let's look at a more lifelike test: downloading a file from a Yandex mirror.

Almost 35MB per second!

I ran the tests a few more times, then decided to clean the SSD anyway; at such speeds it fills up quickly.

Now let's take a look at how fast Wi-Fi works. A wireless network is a tricky thing, and many factors can affect the final performance. The laptop was located 4 meters from the router in a straight line.

Speedtest showed almost 165Mbps on Download and 166 on Upload. Worthy! Especially when it comes to the 2.4GHz band

Iperf showed similar values:

Let's switch to 5GHz now. Since the router can handle 802.11ac, but my work Macbook does not, I connected an external adapter that supports 802.11ac 2x2.

The connection was successful... Let's take a look at the speedtest:

209Mbps on Download 111 on Upload, most likely 210Mbps is the current ceiling for L2TP router performance. Let's hope that Asus will fix this in the new firmware.

Iperf showed results even lower:

D-Link Dir 825 D1

The next in line is the representative of the middle price range D-Link DIR825. The router is equipped with a Dual-Band Wi-Fi module, which is currently rare for the average price range. Let's see what this router is capable of.

Connection via patch cord

Let's move on to testing WiFi networks. The router has two Airgain antennas, so I expect high speeds over Wi-Fi as well.

For a network operating in the 2.4GHz band:

This frequency is maximally loaded, so such a result was expected in principle. How will 5GHz perform?

130-150 Mbps. After detailed tweaking of the settings, it turned out that disabling Wi-Fi encryption increases performance. I haven't discovered America here, of course, but I did not observe the same pattern on the other routers.

Let's move on to the next test subject - Totolink

This router has similar characteristics with D-Link DIR 825, they are built on the same SoC - RTL8197D, but in this router the radio module supports 802.11ac. It will be interesting to evaluate its capabilities in real conditions.

Patch cord:

Um… okay, I'll leave it without comment.

We are getting closer to reality.

To be honest, I didn't think the "old man" RTL8197D was capable of pushing L2TP through itself at such speeds. That makes the Wi-Fi test results all the more interesting.

"People's" frequency - 2.4GHz

Both speedtest and iperf showed almost identical results.
At 5GHz, the speed should generally be prohibitive! Maybe…

But no, even though the status showed that the connection was established at 867 Mbps.

Iperf brings it back down to earth, and does a decent job of it.

Our latest marathon participant - Zyxel Keenetic Ultra

A popular model among L2TP devices. It "accelerates" well and works stably. We connect the patch cord and run the speedtest:

And for the duration of the tests I download the Fedora distribution, which has already become a familiar companion:

Unfortunately, this model from Zyxel does not support 802.11ac, so I will be content with 802.11n. Let's get started!

Let's look at 5GHz

Neither more nor less: the standard. This did not suit me, and I decided to connect a new Time Capsule with 802.11ac support (rather nominal in the Russian-market model) to the router.

There we go! It's a pity that manufacturers don't bundle a Time Capsule with their routers.

And if you measure the speed on the phone / tablet?

Most users, especially those unfamiliar with performance-testing methodology, simply run an app on their phone. I will do the same.

Available was iPhone, tablet and Android phone. It makes no sense to test the connection on each of the routers, so I settled on the latest router model.

For 2.4GHz and 5GHz respectively, here we hit the performance ceiling of the Wi-Fi module on the phone. Devices on android showed approximately the same results, while on a tablet this speed was obtained when connected to a network at 5GHz, at 2.4GHz it will most likely be lower:

Well, tests on the street:

What happened?

The process of testing the new service was extremely captivating for the cat and me, so in the end we tested 10 routers from different manufacturers and price categories, with different internals. Here are the most interesting ones:
  • Zyxel Keenetic Ultra
  • D-Link DIR825
  • Totolink
  • Asus RT-68U
  • Zyxel Keenetic Giga II
  • TP-Link Archer C7 v.1
  • D-Link DIR 850L
  • Buffalo WZR-1759DHP
  • Netgear R7000 Nighthawk
  • Arcadian
So if you have an SSD or HDD RAID at home, a good Wi-Fi router, and specific tasks that require fast Internet, the new service will be useful to you.

Price

The service is new, and the description will appear on the site in a couple of days. The cost is 1,850 rubles per month without our router.

UPD, as requested in the comments:

Asus RT-68U
  By cable (WAN-LAN): speedtest D: 224.2 Mbps / U: 196.77 Mbps; Iperf: 26.3 Mbps; file download (mirror.yandex.ru): ~35 MB/s
  Wi-Fi 2.4 GHz: speedtest D: 164.53 Mbps / U: 165.68 Mbps; Iperf: 140 Mbps
  Wi-Fi 5 GHz: speedtest D: 209.4 Mbps / U: 111.38 Mbps; Iperf: 163 Mbps

D-Link DIR 825 D1
  By cable (WAN-LAN): speedtest D: 352.16 Mbps / U: 370.3 Mbps; Iperf: 354 Mbps; file download: ~43 MB/s
  Wi-Fi 2.4 GHz: speedtest D: 86.72 Mbps / U: 57.82 Mbps; Iperf: 52.5 Mbps
  Wi-Fi 5 GHz: speedtest D: 148.27 Mbps / U: 149.45 Mbps; Iperf: 130 Mbps

Totolink
  By cable (WAN-LAN): speedtest D: 462.6 Mbps / U: 255.8 Mbps; Iperf: 379 Mbps; file download: ~50 MB/s
  Wi-Fi 2.4 GHz: speedtest D: 155.01 Mbps / U: 118.62 Mbps; Iperf: 152 Mbps
  Wi-Fi 5 GHz: speedtest D: 233 Mbps / U: 132.76 Mbps; Iperf: 192 Mbps

Zyxel Keenetic Ultra
  By cable (WAN-LAN): speedtest D: 408.22 Mbps / U: 275.59 Mbps; file download: ~52 MB/s
  Wi-Fi 2.4 GHz: speedtest D: 131.57 Mbps / U: 113.53 Mbps; Iperf: 132 Mbps
  Wi-Fi 5 GHz: speedtest D: 185.4 Mbps / U: 181.07 Mbps; Iperf: 171 Mbps

For a long time I was in no rush to move my home network from 100 Mbps to 1 Gbps, which is odd, given how many files I push across it. When I spend money upgrading my computer or infrastructure, I usually get an immediate performance boost in the applications and games I run. Many users happily treat themselves to a new video card, a new CPU or some gadget, yet network equipment somehow doesn't attract the same enthusiasm. It really is hard to invest hard-earned money in network infrastructure instead of another technological birthday present.

However, my bandwidth requirements are very high, and at one point I realized that the infrastructure for 100 Mbps was no longer enough. All my home computers already have integrated 1 Gb/s adapters (on motherboards), so I decided to take the price list of the nearest computer company and see what I need to convert the entire network infrastructure to 1 Gb/s.

No, a gigabit home network is not that complicated at all.

I bought and installed all the hardware. I remember that it used to take about a minute and a half to copy a large file over a 100Mbps network. After upgrading to 1 Gb / s, the same file began to be copied in 40 seconds. The performance gain was nice, but still I didn't get the 10x improvement I'd expect from comparing 100Mbps to 1Gbps throughput of the old and new networks.

What is the reason?

For a gigabit network, all its parts must support 1 Gb / s. For example, if you have gigabit network cards and appropriate cables installed, but the hub / switch only supports 100 Mbps, then the entire network will operate at 100 Mbps.

The first requirement is the network controller. Ideally, each computer on the network is equipped with a gigabit network adapter (either discrete or integrated on the motherboard). This requirement is the easiest to meet, since most motherboard manufacturers have been integrating gigabit network controllers for the past few years.

The second requirement is the network cable, which must also support 1 Gbps. There is a common misconception that gigabit networks require Category 5e cable, but in fact even old Cat 5 cable supports 1 Gbps. Cat 5e cable nevertheless performs better, so it is the better choice for gigabit networks, especially over longer runs. It is also the cheapest option today, since the old Cat 5 standard is effectively obsolete. The newer and more expensive Cat 6 cables offer even better gigabit performance; we will compare Cat 5e and Cat 6 a bit later in this article.

The third and probably most expensive component of a gigabit network is the 1 Gbps switch. It is better to use a switch (perhaps combined with a router), since a hub is not the most intelligent device: it simply broadcasts all network data to every port, which leads to a large number of collisions and slows the network down. If you need high performance, a gigabit switch is indispensable, as it forwards data only to the port it is destined for, which effectively increases the speed of the network compared to a hub. A router usually contains a built-in switch (with multiple LAN ports) and also lets you connect your home network to the Internet. Most home users understand the benefits of a router, so a gigabit router is an attractive option.

How fast should gigabit be? When you hear the prefix "giga", you probably think of 1000 megabytes, and assume a gigabit network should deliver 1000 megabytes per second. If you think so, you are not alone. But alas, the reality is different.

What is gigabit? That's 1000 megabits, not 1000 megabytes. There are 8 bits in one byte, so let's just calculate: 1,000,000,000 bits divided by 8 bits = 125,000,000 bytes. There are about a million bytes in a megabyte, so a gigabit network should provide a theoretical maximum data transfer rate of about 125 MB / s.

Sure, 125 MB/s doesn't sound as impressive as a gigabit, but think about it: a network at that speed should theoretically transfer a gigabyte of data in just eight seconds. A 10 GB archive should be transferred in just a minute and 20 seconds. The speed is incredible: just remember how long it took to transfer a gigabyte of data before USB sticks became as fast as they are today.
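The same arithmetic in a few lines of Python, for anyone who wants to plug in their own numbers; the figures below are just the ones from the paragraphs above.

```python
# Back-of-the-envelope math: converting a line rate in gigabits per second
# into megabytes per second and estimating transfer times.
def mbytes_per_second(gigabits_per_second: float) -> float:
    """Convert a link speed in Gbit/s to MB/s (1 byte = 8 bits)."""
    bits_per_second = gigabits_per_second * 1_000_000_000
    return bits_per_second / 8 / 1_000_000

def transfer_time_seconds(size_gb: float, speed_mb_s: float) -> float:
    """How long a file of size_gb (in GB) takes at speed_mb_s (in MB/s)."""
    return size_gb * 1000 / speed_mb_s

link = mbytes_per_second(1.0)                                 # 125.0 MB/s theoretical ceiling
print(f"1 Gbit/s = {link:.0f} MB/s")
print(f"1 GB file:  {transfer_time_seconds(1, link):.0f} s")  # ~8 s
print(f"10 GB file: {transfer_time_seconds(10, link):.0f} s") # ~80 s
```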

Expectations were high, so we decided to transfer the file over a gigabit network and enjoy speeds close to 125 MB / s. We don't have any specialized miraculous equipment: a simple home network with some old but decent technology.

Copying a 4.3 GB file from one home computer to another ran at an average speed of 35.8 MB/s (we ran the test five times). This is only 30% of the theoretical 125 MB/s gigabit network ceiling.
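For reference, reproducing this measurement takes only a few lines. The sketch below (the paths are hypothetical placeholders for a local file and a share mounted from the other machine) times a copy and averages five runs, the same way we did.

```python
# Time a large file copy to a network share and report the average speed in MB/s.
import os
import shutil
import time

SRC = r"C:\iso\test_image.iso"          # hypothetical 4.3 GB test file
DST = r"\\server\share\test_image.iso"  # hypothetical destination on the other PC

def timed_copy(src: str, dst: str) -> float:
    """Copy src to dst and return the observed throughput in MB/s."""
    size_mb = os.path.getsize(src) / 1_000_000
    start = time.perf_counter()
    shutil.copyfile(src, dst)
    elapsed = time.perf_counter() - start
    return size_mb / elapsed

speeds = [timed_copy(SRC, DST) for _ in range(5)]  # five runs, as in the test
print(f"average: {sum(speeds) / len(speeds):.1f} MB/s")
```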

What are the causes of the problem?

Picking up components for installing a gigabit network is quite simple, but getting the network to work at maximum speed is much more difficult. The factors that can cause a network to slow down are quite numerous, but as we have found, it all comes down to how fast hard drives are able to transfer data to the network controller.

The first limitation to consider is the gigabit network controller interface with the system. If your controller is connected via the old PCI bus, then the amount of data that it can theoretically transfer is 133 MB / s. For Gigabit Ethernet's 125 MB/s bandwidth, this seems to be sufficient, but remember that PCI bus bandwidth is distributed throughout the system. Each additional PCI card and many system components will use the same throughput, which reduces the resources available to the network card. Controllers with the new PCI Express (PCIe) interface do not have these problems, as each PCIe lane provides at least 250 MB/s of bandwidth, and is exclusive to the device.

Next important factor that affects network speed - cables. Many experts point out that in the case of laying network cables near power cables that are sources of interference, low speeds are guaranteed. Long cable lengths are also a problem, as Cat 5e copper cables are certified for a maximum length of 100 meters.

Some experts recommend running the new Cat 6 cable instead of Cat 5e. It is often difficult to justify such recommendations, but we will try to test the impact of the cable category on a small gigabit home network.

Let's not forget about the operating system. It is rarely the bottleneck in a gigabit environment these days, but it is worth mentioning that Windows 98 SE (and older operating systems) cannot take advantage of gigabit Ethernet, since their TCP/IP stack can barely saturate even a 100 Mbps link. Windows 2000 and later versions will work, although older releases need some tweaking to get the most out of the network. We will be using 32-bit Windows Vista for our tests, and although Vista does not have the best reputation for some tasks, it supports gigabit networking out of the box.

Now let's move on to hard drives. Even the old IDE interface in its ATA/133 version should be enough to support a theoretical transfer rate of 133 MB/s, and the newer SATA specification easily fits the bill, providing at least 1.5 Gbps (150 MB/s). However, while the cables and controllers can handle data at that rate, the hard drives themselves cannot.

Take, for example, a typical modern 500 GB hard drive, which should provide a sustained throughput of about 65 MB/s. At the beginning of the platters (the outer tracks) the speed can be higher, but it drops as you move to the inner tracks, where data is read at about 45 MB/s.
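That observation can be boiled down to a one-line rule: end-to-end copy speed is the minimum of network, read and write throughput. A tiny illustration with the ballpark figures above (these numbers are illustrative, not measurements of any specific drive):

```python
# Rough model of the bottleneck argument: the end-to-end speed is bounded by
# the slowest stage in the chain.
def effective_speed_mb_s(network_mb_s: float, read_mb_s: float, write_mb_s: float) -> float:
    """End-to-end throughput is limited by the slowest stage of the pipeline."""
    return min(network_mb_s, read_mb_s, write_mb_s)

gigabit_ceiling = 125.0   # theoretical 1 Gbit/s in MB/s
hdd_outer_read = 65.0     # typical sustained read on outer tracks, MB/s
hdd_inner_read = 45.0     # read speed on inner tracks, MB/s

print(effective_speed_mb_s(gigabit_ceiling, hdd_outer_read, write_mb_s=70.0))  # ~65 MB/s
print(effective_speed_mb_s(gigabit_ceiling, hdd_inner_read, write_mb_s=40.0))  # ~40 MB/s
```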

It seemed to us that we considered all possible "bottlenecks". What was left to do? It was necessary to conduct several tests and see if we could get the network performance to the theoretical limit of 125 MB / s.

Test configuration

Server system
  CPU: Intel Core 2 Duo E6750 (Conroe), 2.66 GHz, FSB-1333, 4 MB cache
  Motherboard: ASUS P5K, Intel P35, BIOS 0902
  Network: integrated Abit Gigabit LAN controller
  Memory: Wintec Ampo PC2-6400, 2x 2048 MB, DDR2-667, CL 5-5-5-15 at 1.8 V
  Video card: ASUS GeForce GTS 250 Dark Knight, 1 GB GDDR3-2200, 738 MHz GPU, 1836 MHz shader
  Hard disk 1: Seagate Barracuda ST3320620AS, 320 GB, 7200 rpm, 16 MB cache, SATA 300
  Hard disk 2: 2x Hitachi Deskstar 0A-38016 in RAID 1, 7200 rpm, 16 MB cache, SATA 300
  Power supply: Aerocool Zerodba 620W, ATX12V 2.02

Client system
  CPU: Intel Core 2 Quad Q6600 (Kentsfield), 2.7 GHz, FSB-1200, 8 MB cache
  Motherboard: MSI P7N SLI Platinum, Nvidia nForce 750i, BIOS A2
  Network: integrated nForce 750i Gigabit Ethernet controller
  Memory: A-Data EXTREME DDR2 800+, 2x 2048 MB, DDR2-800, CL 5-5-5-18 at 1.8 V
  Video card: MSI GTX260 Lightning, 1792 MB GDDR3-1998, 590 MHz GPU, 1296 MHz shader
  Hard disk: Western Digital Caviar WD5000AAJS-00YFA, 500 GB, 7200 rpm, 8 MB cache, SATA 300
  Power supply: Ultra HE1000X, ATX 2.2, 1000 W

Shared equipment
  Network switch: D-Link DGS-1008D, 8-port 10/100/1000 unmanaged gigabit desktop switch

Software and drivers
  OS: Microsoft Windows Vista Ultimate 32-bit 6.0.6001, SP1
  DirectX: DirectX 10
  Graphics driver: Nvidia GeForce 185.85

Tests and settings

Nodesoft Diskbench 2.5.0.5: file copy, creation, read, and batch benchmark
SiSoftware Sandra 2009 SP3 (2009.4.15.92): CPU arithmetic/multimedia tests, memory bandwidth benchmark

Before we get into any tests, we decided to test the hard drives offline to see how much throughput we can expect in an ideal scenario.

We have two PCs running on our gigabit home network. The first one, which we will call the server, is equipped with two disk subsystems. The main hard drive is a 320 GB Seagate Barracuda ST3320620AS a couple of years old. The server acts as a NAS with a RAID array of two 1TB Hitachi Deskstar 0A-38016 hard drives mirrored for redundancy.

We called the second PC on the network a client, it has two hard drives: both 500 GB Western Digital Caviar 00AAJS-00YFA, about six months old.

We first tested the speed of the server and client system hard drives to see what performance we can expect from them. We used the hard disk test in SiSoftware Sandra 2009.

Our dreams of achieving gigabit file transfer speeds immediately dissipated. Both of the single hard drives reached a maximum read speed of around 75 MB/s in ideal conditions. Since this test is conducted in real conditions, and the drives are 60% full, we can expect read speeds closer to the 65 MB / s index that we got for both hard drives.

But let's look at the performance of RAID 1. The nice thing about this array is that a hardware RAID controller can improve read performance by fetching data from both drives at once, similar to a RAID 0 array; as far as we know, this effect appears only with hardware RAID controllers, not with software RAID. In our tests the RAID array delivered much better read performance than a single drive, so there is a good chance we will get high network transfer speeds from it. The array peaked at an impressive 108 MB/s, but in practice performance should be closer to 88 MB/s, since the array is 55% full.

So we should get about 88 MB/s over the gigabit network, right? That is not quite the 125 MB/s ceiling of gigabit Ethernet, but it is far better than a 100 Mbps network, whose ceiling is 12.5 MB/s, so 88 MB/s in practice would not be bad at all.

But not everything is so simple. The fact that the read speed from hard drives is quite high does not mean at all that they will write information quickly in real conditions. Let's run some disk write tests before using the network. We will start from our server and copy a 4.3 GB image from a fast RAID array to the 320 GB system hard drive and back. We will then copy the file from the client's D: drive to its C: drive.

As you can see, copying from a fast RAID array to drive C: gave an average speed of only 41 MB / s. And copying from drive C: to a RAID 1 array brought it down to just 25 MB/s. What's happening?

This is exactly what happens in reality: drive C: is a little over a year old, it is 60% full and probably somewhat fragmented, so it sets no records. There are other factors too, namely the overall speed of the system and its memory. The RAID 1 array is built from relatively new hardware, but because of the redundancy, data must be written to both drives at once, which hurts performance. A RAID 1 array can give you high read speeds, but write speed is sacrificed. We could of course use a striped RAID 0 array, which offers high read and write speeds, but if one drive dies, all the data is lost. Overall, RAID 1 is the better option if you value the data stored on the NAS.

However, not all is lost. The newer 500 GB Western Digital Caviar is capable of writing our file at 70.3 MB/s (averaged over five test runs) and peaks at 73.2 MB/s.

With all that said, we were expecting a maximum Gigabit transfer rate of 73MB/s from a NAS RAID 1 array to the client's C: drive in real-world conditions. We'll also test transferring files from the client's C: drive to the server's C: drive to see if we can realistically expect 40 MB/s in that direction.

Let's start with the first test, where we sent a file from the client's C: drive to the server's C: drive.

As you can see, the results are in line with our expectations. A gigabit network, theoretically capable of 125 MB/s, sends data from the client drive C: at the maximum possible speed, probably around 65 MB/s. But, as we showed above, the server drive C: can only write at a speed of about 40 MB / s.

Now let's copy the file from the server's high-speed RAID array to the C: drive of the client computer.

Everything turned out just as we expected. From our tests, we know that the C: drive of the client computer is capable of writing data at about 70 MB/s, and gigabit network performance was very close to this speed.

Unfortunately, our results don't even come close to the theoretical maximum throughput of 125 MB/s. Can we test the maximum speed of the network? Sure, but not in a realistic scenario. We will try to transfer information over the network from memory to memory in order to bypass any bandwidth limitations of hard drives.

To do this, we will create a 1 GB RAM disk on the server and client PCs, and then transfer the 1 GB file between these disks over the network. Since even slower DDR2 memory can transfer data at over 3000 MB/s, network bandwidth will be the limiting factor.
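A RAM disk is one way to do this; another is to stream data straight from memory over a TCP socket and discard it on the other end, so no drive is involved at all. A rough Python sketch of such a memory-to-memory test (the address is a placeholder; run receive() on one machine and send() on the other, Python 3.8+):

```python
# Minimal memory-to-memory throughput test over TCP: the payload comes from RAM
# on the sender and is discarded on the receiver, so hard drives play no part.
import socket
import time

HOST, PORT = "192.168.1.10", 50007   # hypothetical address of the receiving PC
CHUNK = 1024 * 1024                  # 1 MB buffer
TOTAL = 1024 * 1024 * 1024           # transfer 1 GB

def receive() -> None:
    """Accept one connection, read 1 GB and print the observed speed."""
    with socket.create_server(("", PORT)) as srv:
        conn, _ = srv.accept()
        with conn:
            received = 0
            start = time.perf_counter()
            while received < TOTAL:
                data = conn.recv(CHUNK)
                if not data:
                    break
                received += len(data)
            elapsed = time.perf_counter() - start
            print(f"{received / 1_000_000 / elapsed:.1f} MB/s")

def send() -> None:
    """Send 1 GB of in-memory data to the receiver."""
    payload = b"\0" * CHUNK          # data straight from memory
    with socket.create_connection((HOST, PORT)) as conn:
        sent = 0
        while sent < TOTAL:
            conn.sendall(payload)
            sent += CHUNK
```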

We got a maximum speed of 111.4 MB/s for our gigabit network, which is very close to the theoretical limit of 125 MB/s. An excellent result, there is no reason to complain about it, since the real throughput will still not reach the theoretical maximum due to the transfer of additional information, errors, retransmissions, etc.

The conclusion will be as follows: today the performance of information transfer over a gigabit network rests on hard drives, that is, the transfer speed will be limited by the slowest hard drive involved in the process. Having answered the most important question, we can move on to speed tests depending on the cable configuration, so that our article is complete. Could optimizing cabling bring network speeds even closer to the theoretical limit?

Since the performance in our tests was close to what we expected, we are unlikely to see any improvement when changing the cabling configuration. But we still wanted to run tests to get closer to the theoretical speed limit.

We ran four tests.

Test 1: default.

In this test, we used two cables about 8 meters long, each connected to a computer on one end and to a gigabit switch on the other. We left the cables where they were laid, that is, next to the power cables and sockets.

In the second test, we used the same 8-meter cables as in the first, but routed the network cable as far away from the power cables and extension cords as possible.

In the third test, we removed one of the 8-meter cables and replaced it with a one-meter Cat 5e cable.

In the last test, we replaced the 8-meter Cat 5e cables with 8-meter Cat 6 cables.

In general, our testing of different cable configurations did not show a serious difference, but conclusions can be drawn.

Test 2: Reduce interference from power cables.

On small networks such as our home network, the tests show that you don't need to worry about running LAN cables near power cables, outlets and extension cords. Interference will of course be higher, but it has no serious effect on network speed. That said, it is still best to avoid routing cable alongside power lines, and keep in mind that the situation on your network may differ.

Test 3: we reduce the length of the cables.

This is not an entirely fair test, but we tried to spot a difference anyway. Keep in mind that when an eight-meter cable is swapped for a one-meter one, the result may be influenced simply by the cables being different rather than by the difference in length. In any case, most tests show no significant difference, apart from an anomalous increase in throughput when copying from the client's C: drive to the server's C: drive.

Test 4: Replace Cat 5e cables with Cat 6.

Again, we did not find a significant difference. Since our cables are only about 8 meters long, longer runs might show a bigger effect. But unless you are near the maximum length, Cat 5e cable works quite well in a gigabit home network with about 16 meters between two computers.

It is interesting to note that cable manipulation had no effect on data transfer between computer RAM disks. It's quite obvious that some other component on the network was limiting the performance to the magic number of 111 MB/s. However, such a result is still acceptable.

Do gigabit networks provide gigabit speeds? As it turns out, they almost do.

However, in real conditions, network speed will be severely limited by hard drives. In a synthetic memory-to-memory scenario, our gigabit network delivered performance very close to the theoretical limit of 125 MB/s. Normal network speeds, taking into account the performance of hard drives, will be limited to 20 to 85 MB / s, depending on the hard drives used.

We also tested the effect of power cables, cable length, and switching from Cat 5e to Cat 6. In our small home network, none of the above factors significantly affected performance, although we want to note that in a larger and more complex network with longer lengths these factors can influence much more strongly.

In general, if you move a lot of files around your home network, we recommend installing a gigabit network. Upgrading from a 100 Mbps network will give a nice performance boost; at the very least you will get a twofold increase in file transfer speed.

Gigabit Ethernet on a home network can give you a greater performance boost if you read files from a fast NAS that uses a hardware RAID array. On our test network, we transferred a 4.3 GB file in just one minute. Over a 100 Mbps connection, the same file was copied for about six minutes.

Gigabit networks are becoming more and more affordable. Now it remains only to wait for the speed of hard drives to rise to the same level. In the meantime, we recommend building arrays that can bypass the limitations of current HDD technology. Then you can squeeze more performance out of a gigabit network.

Gigabit Internet at home: what do you do with it? Testing a high-speed network and looking for its weak points

The Internet is expensive, the speed is low - such complaints will probably always appear. Having assessed both aspects, one can object: the prices in Belarus for access to the network, taking into account all the features, are quite acceptable. But what about the speed?.. For someone, a megabit is enough, for another, even 100 will not be enough. It all depends on the needs, and modern content cannot be called “light”, and its providers do not care too much about the “width” of the channel. As an experiment, we asked the Internet operator Atlant Telecom to provide home gigabit Internet - in order to understand whether the Belarusian needs the notorious 1 Gbps.

What connection speed is considered comfortable? In some countries, the “social” minimum is considered to be 5 megabit internet. Leadership has been held by South Korea for a long time with an indicator of 28.6 Mbps, the average global indicator is 7.2 Mbps. For comparison, in Belarus, according to a report by Akamai, the average speed is about 9.7 Mbps, and our country is in the fifth ten of the world ranking, which is a good indicator.

But what about the mythical gigabit Internet? Mythical, that is, for the ordinary user who does not know what a data center, the Internet of things or big data is; in other words, for 95% of Belarusians. In principle it could be available even today, but telecom operators for some reason do not offer such tariffs, or the offer is very limited, although a few years ago there was at least one option.

Connection

Until the moment of connection, I used a tariff with a 50-megabit connection for a long time (100 Mbps in peering). The pros and cons of such a connection are familiar to many: torrents are able to take the entire dedicated channel, but IPTV and games do not suffer much at the same time - the speed is enough for everything to work simultaneously.

Actually, the transition to an (even more) high-speed connection consisted in laying a new cable directly from the operator's equipment, replacing the twisted pair cable in the apartment itself and the router - and the speed increased 20 times. Then a few surprises awaited us.

The first surprise came from the popular Speedtest. When I tried to measure the connection speed, the measurement service sent me to a "ban" (a quirk of Speedtest's algorithms). It took some time to solve the problem; the provider reconfigured the hardware.

Now, when some incredible values ​​appeared on the “speedtest” scoreboard, the time has come for the second surprise: it turns out that not every server in Belarus is able to “raise” that same gigabit. Well, let's try foreign ...

The server refused to measure the speed - either "lay down", or sent it to the "ban"

Only some measurement results are given, and the Yandex service did not want to overclock

Some remote hosts were sensitive to the load, blocking access, but nevertheless, the speed varied from 450-550 Mbps in the USA (Cupertino) to 930 Mbps in the Russian direction, and Europe - Germany, France, Poland - was approximately in the middle .

Synthetic tests are completed, you need to try something close to reality. We are looking for files on the p2p network, and then launch Steam. In the first case, the most popular file allowed us to approach the mark of 41 MB per second. This is not the limit, but the result is indicative - it does not reach the maximum.

In Valve's service, I picked a game of about 30 GB. I click "Install", but the speed will not rise above 330 Mbps (41 MB per second). What is that rustling under the table? There is the bottleneck: the hard drive, which has exhausted its capabilities. I select the SSD in the settings, and the speed rises to 64 MB per second (about 512 Mbps).

The Internet is there, the speed is not

What conclusions can be drawn? In the operator's area of responsibility everything is in order: gigabit reaches the router, but beyond it the bottlenecks begin. The main causes of reduced speed are obvious: a hard drive unable to keep up with writing the data (a budget SSD may also fail to cope), the overall performance of the computer, or an insufficient upload speed from the source (it can be limited programmatically by the remote side).

If the user has his own router, that device may also be a weak link; here we are talking about its processor and ports. Besides, even in a "gigabit" device the Ethernet ports may well be 100-megabit. And then there is the seemingly banal issue of wiring: old or cheap twisted pair, which in many homes runs under and above the baseboards, often has only four conductors (two pairs), and it will not carry gigabit no matter how hard you try. With wireless connections things are even more complicated.

"How does it happen? You buy a router with 'gigabit' written on it, but that does not mean you will get that speed always and everywhere. Usually it refers to the speed between the LAN ports, while between the LAN and WAN ports it may not be reached. That is why operators have recommended, tested models with guaranteed performance.

There is even more marketing in wireless. A simple example: the label '300 Mbps' or '1100 Mbps' for Wi-Fi," says Oleg Gavrilov, head of the fixed-line department at velcom. A cable provides duplex communication with the same performance in each direction.

Wi-Fi works differently, and 1100 Mbps means the speed will be split roughly in half. Moreover, for speeds above 300 Mbps the figures for the two bands are given and summed. "And honest manufacturers also note next to the speed figure that the data was obtained in laboratory conditions, without any interference at all," Oleg added.

What else affects the data transfer rate? LAN ports barely process the data (more precisely, the processor is minimally involved), while the WAN port turns out to be far more demanding of the router's performance. Hence the question of price: the more powerful the processor, the more expensive the router, even if its other specifications are ordinary.

"Next comes the terminal device: a laptop, computer, TV or set-top box. It's 2017, and gigabit network cards are present in all more or less modern PCs. With other devices there are nuances, especially if the laptop has a power-saving mobile processor."

Much, if not everything, depends on what the user does online. When simply browsing, you won't use even a fraction of those 100 megabits; 5 is enough. If you watch videos, download files and play online games, then 50 Mbps is more than enough. But this is not only about data transfer rate; the capabilities of the computer and its codecs matter too: "Want to watch 4K over the Internet, but it stutters or drops to Full HD? It turns out the subscriber's device simply can't handle such content." Practice showed that YouTube delivers 4K content without delay (on the 50-megabit tariff I often had to wait for buffering). The same goes for 8K, but the computer can't cope with it, turning playback into a slide show.

From a technical point of view, a 50 Mbps channel is enough for streaming 4K content, provided the routes are built correctly. Today a household usually has a single consumer of ultra-high-definition video, the TV, and even those are few. There are also monitors, of which there are probably more, but the advantages of 4K films (which are still very hard to find) are lost on a small screen. Over time, though, the need for them will take shape.

Loading - 5%

Judging by Internet usage patterns, even with a gigabit connection the user's behavior will hardly change: you can indulge in tests, download a couple of games and movies, and then return to your usual routine (heavy downloaders and organizers of "home networks" are not counted).

Oleg Gavrilov agrees with us: “Now it is no longer fashionable to download “in reserve”. Everything can be viewed online.

Objectively, it is, but even without it, Internet consumption in my case did not increase. Of course, in the first days the traffic showed new records - during the time I used the gigabit channel, I downloaded only 48 GB more than usual. And this is due to enhanced tests. Then the traffic consumption gradually fell to the previous values.

Today, large Belarusian operators providing Internet access are increasingly looking towards GPON technology (unlike Ethernet, this means “optics to the apartment”, and not “optics to the house”). It has more capabilities and, among other things, does not require regular replacement of passive infrastructure when speeds increase.

It is logical to assume that with the spread of 4K and virtual content in Belarus, the need for speeds will also grow. But for now, Belarusians will have to wait.

I recently visited an internet forum where people were discussing their 1 gigabit fiber internet connections. "Lucky them!" I thought. But is it really about luck? If you notice that instead of 1 Gbps you get about 80 Mbps, or even less, the problem may be the wrong Ethernet cable.

In this article, we will tell you how to choose the right Ethernet cable for maximum Internet connection speed.

WiFi vs Ethernet

Let's find out right away that an Ethernet cable provides faster Internet connections than Wi-Fi. Yes, a wireless network is very convenient, but if you want to get the best internet speed, then you should use an Ethernet cable.

Ethernet to the rescue!

Naturally, if you have a wired network and a very fast broadband internet, you do not want to use a 100 Mbps (Fast Ethernet) connection between your computer and your ISP's modem. That would be stupid! You need gigabit internet.

All you need to do is connect all your home devices with inexpensive Cat 6 Ethernet cables, and use cheap gigabit switches as "nodes" to connect your devices.

My home network looks like this:

Pretty simple, right?

The orange line is a Cat 6 Ethernet cable. You simply connect computers, routers, laptops with these cables and everything “just works”.

However, you should note that some laptops come with cheap built-in Fast Ethernet adapters that only offer 100Mbps connection speeds. If you have such a situation with your computer, buy a gigabit USB-ethernet adapter.

But which switches and Ethernet cables should you buy?

This is also a pretty easy question.

For Ethernet switches, you need a quality "gigabit Ethernet switch". We recommend purchasing the 8-port D-Link Gigabit DGS-108, which is great for home use.

This switch is very convenient to use: when you plug in the Ethernet cable and the connector flashes green, then it is running at 1 gigabit. If the indicator is orange, the speed is only 10 or 100 Mbps. Thus, you can determine which Ethernet adapter is used in your computer, as we have already discussed above.
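Besides looking at the LED color, you can ask the operating system directly. On Linux, the negotiated link speed is exposed in sysfs; the sketch below reads it (the interface name "eth0" is just an example, and the value is unavailable for Wi-Fi or disconnected ports).

```python
# Read the negotiated Ethernet link speed from Linux sysfs.
from pathlib import Path

def link_speed_mbps(interface="eth0"):
    """Return the negotiated link speed in Mbps, or None if it cannot be read."""
    path = Path(f"/sys/class/net/{interface}/speed")
    try:
        return int(path.read_text().strip())   # e.g. 10, 100, 1000
    except (FileNotFoundError, ValueError, OSError):
        return None

speed = link_speed_mbps("eth0")
print("link not detected" if speed is None else f"{speed} Mbps")
```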

As for Ethernet cables, you just need to make sure you are using Cat 6 (category 6). Ethernet cables usually have a category printed on them, such as:

Please note that there are other types of Ethernet cables such as Cat 5, Cat 5e, Cat 6a, etc. Any cable that is labeled Cat 6 is a great option for our situation (regardless of the letter at the end, if any). Cat 5 Ethernet cables should not be purchased because they are designed to operate on networks less than 1 Gbps.

By the way, connectors on Ethernet cables do not play a special role in the quality and speed of the signal. The four twisted pairs of wires inside the cable play a much greater role. The higher the category, the faster the cable will transfer data. That's why you should use Cat 6 or higher. Cat 6 is for Gigabit Ethernet!

Also, you don't have to worry about shielding if you're buying a pre-made cable. Just make sure it's Cat 6 and full speed ahead!

We have prepared some tips and notes on using Ethernet cables throughout the home:

  • Do not unwind the network cable;
  • Do not pinch the cable in the door;
  • Do not bend the cable at right angles; round it off at the corners.

Cat 6 Ethernet cable is slightly stronger than others because it has a plastic core that accommodates twisted pairs of wires. But you still shouldn't abuse the strength of the cable. The more you squeeze the cable, the more the wires inside will move, and the slower the data transfer rate will be.

With a few simple tips, you can make your home network as fast as possible. A 1 Gbps internet connection is no problem, of course, if your ISP offers such fast broadband.


At the Massachusetts Institute of Technology, in May 1993, the world's first online newspaper, The Tech, was launched.

By 2008 the total distribution speed exceeded 172 Gbps, which accounted for a quarter of all traffic at the Moscow traffic exchange point MSK-IX. About 3 thousand client requests per second: 10 million per hour, 240 million per day. 40,000 packets per second on the network interface, 15,000 interrupts per second, roughly 1,200 processes in top, and a load of 10-12 on an 8-core machine at peak times. Even so, some requests were dropped and could not be served. Unfortunately, we could not find current figures for peer-to-peer traffic; if anyone knows them, share in the comments for comparison.

In Ukraine, in August 2005, a regional tracker appeared - torrents.net.ua, the need to create a resource was due to the lack of high-speed and unlimited access to world traffic for most users in Ukraine.

Until September 2008, the tracker was closed to users outside the UA-IX zone, so the number of users grew at a slow pace.

First hosting providers

What about website hosting? At first, there were no hosting providers as such. The websites were hosted on the servers of universities and organizations with a permanent connection to the Internet. Tracking what happened in the period 1991-1995 is now quite problematic. In 1995, the Angelfire service offered free as much as 35 KB of space for user pages, and GeoCities - a whole 1 MB. Read more about the first steps of hosting in the article of the same name "The first steps of hosting", which was published on our blog earlier and is perhaps one of the most complete today.

$200/month for a 200 MB server quota and 3,000 MB of transfer (the 500 MB minimum tariff plan), with traffic over the limit billed at $55 down to $27 per GB. You could also connect a "dedicated line" for your site at the following rates: 128K for $395/month, 384K for $799/month, 1M for $1,200/month. Connecting the "channel" and activating hosting also involved a setup fee of roughly one monthly payment. At the end of 2000 the same provider offered unlimited disk space with traffic-only billing and cut the price of traffic to $40 per 20 GB. By 2002 it had lowered the price to $20, made traffic "unlimited" and reintroduced quota restrictions.

The prices for renting the first dedicated servers in 2000 are also interesting:

A server with an 8 GB HDD looks like a real "fossil" nowadays. Then again, I personally used a PC until 2004 whose HDD had a usable capacity of about 7 GB. And of course the $5,000+/month fee for 6 Mbps to the server looks frightening now. The price was later reduced to $300/Mbps, but that was still far from cheap.

It goes without saying that prices for connectivity and Internet access fell thanks to the growing number of subscribers and the construction of new communication channels, including submarine optical cables. Once you appreciate the complexity of laying cable on the ocean floor and learn the approximate cost of such a project, it becomes clear why 1 Mbps across the Atlantic could cost $300 a month or more. You can read more about the history of backbone submarine Internet networks in our article:

In Ukraine and the Russian Federation, the process of hosting your own sites began, perhaps, with the free hosting narod.ru from Yandex, in 2000:

There was also a similar project from mail.ru, boom.ru, but that free hosting never became as widespread as Narod. Later, Yandex's free hosting was absorbed by the most successful free website builder and host of 2008-2010, uCoz, and building a site with uCoz tools is now available on the narod.ru domain. Yandex gave up Narod because of the rise of social networks and declining interest in build-your-own-site services.

Until 2002, hosting your own servers in Ukraine was only financially viable with home network providers, and even then most people kept their servers in offices or even at home because collocation traffic was so expensive, although this violated the terms of service for home subscribers. Many simply preferred to use ordinary desktop computers for the purpose rather than spend money on "server" hardware. Such old-timers can still be found today. But while back then it was easy to understand why you would want a "hosting" setup at home, now it is harder to see the point, and that is not about people who like to run experiments and need a server at home for that.

The situation abroad was better, because there the Internet became available to the population earlier and the development process was started earlier. The Netherlands is gradually becoming a "mecca" for hosting servers, as it offers a good geographical location, which means connectivity with many operators, low electricity prices, loyal legislation that contributes to the growth of the IT sector.

So, in 1997, two commercial airline pilots decided to start a company that helped other companies get a presence on the Internet by creating an Internet directory, as well as providing services for creating and hosting websites and connecting to the Internet. The Internet Archive preserved a 1998 version of the website, which, however, did not contain anything other than a contact:

Although, as we can see, there was another move - the amount of included RAM became much less by default :)

At the same time in Ukraine, one of the largest cable providers of Internet and television, Volia, realized that building its own data center was a pressing necessity. Home Internet subscribers mostly download traffic, so the outgoing channel remains practically free and unused; that is hundreds of megabits that could be sold by hosting subscribers' dedicated servers. On top of that, there were big savings to be had, since many subscribers could use resources hosted in the data center instead of downloading from expensive foreign servers.

This is how the Volya data center arose, which already in 2006 offered the following conditions:

In effect, Ukrainian traffic was offered without metering, and only the consumed foreign traffic was billed. Notably, incoming foreign traffic cost an order of magnitude more than outgoing, which is understandable, because it was what home Internet subscribers consumed. Besides, for servers that generate traffic, request traffic is usually small, from 2 to 30% of outgoing traffic depending on the kind of resources hosted on the server.

So, if these are web pages with a large number of elements, then the amount of request traffic is higher, since there is confirmation of the successful loading of each of the elements, which leads to an increase in the growth of traffic entering the server. Also, incoming traffic can be generated by subscribers when they upload something to the server. When downloading files, the % of incoming traffic is negligible and is less than 5% of the outgoing in most cases.

It is also interesting that colocating your own server in the Volia data center simply made no sense, since it cost the same as renting one. In effect, the Volia data center provided rental servers of various classes for free, depending on the chosen tariff plan.

Why can servers be free? The answer is simple: the equipment is standardized and bought in bulk, so everything is easier to maintain, administer and automate, and fewer man-hours are needed. Placing subscribers' own servers in colocation creates a host of problems, from a non-standard server that does not fit in the rack, forcing you to allocate more rack units than planned or to turn the subscriber away because of a non-standard case, to having to let the subscriber into the facility, give them physical access to the server, store spare parts on site and provide engineers to replace them when needed.

Thus, “colo” turns out to be more expensive to maintain and it makes no sense to provide it at lower rates for the data center.

In Russia, at this time, data centers went further and began to offer conditionally unlimited traffic for free. For example, Agave offers the following conditions:

Incoming and outgoing traffic is unlimited and completely free. The following conditions must be met:

Incoming traffic must not exceed 1/4 of outgoing traffic.
Outgoing foreign traffic must not exceed outgoing Russian traffic.
Note: traffic is divided into Russian and foreign geographically.
Failure to meet these conditions is billed at the following rates:

Incoming traffic in excess of 1/4 of outgoing is charged at 30 rubles/GB.
Outgoing foreign traffic in excess of outgoing Russian is charged at 8.7 rubles/GB.
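To make the rules concrete, here is a small sketch of how such a surcharge could be computed. It is only an illustration of the conditions as quoted above, not Agava's actual billing code, and the example volumes are invented.

```python
# Toy calculator for the "conditionally unlimited" traffic rules quoted above.
# Volumes are in GB per billing period; rates are the ones from the text.
def overage_cost(incoming_gb, outgoing_ru_gb, outgoing_foreign_gb):
    """Return the surcharge in rubles under the two 'free traffic' conditions."""
    outgoing_total = outgoing_ru_gb + outgoing_foreign_gb
    cost = 0.0
    # Condition 1: incoming must not exceed 1/4 of outgoing (30 rub/GB over the limit).
    excess_incoming = max(0.0, incoming_gb - outgoing_total / 4)
    cost += excess_incoming * 30
    # Condition 2: outgoing foreign must not exceed outgoing Russian (8.7 rub/GB over).
    excess_foreign = max(0.0, outgoing_foreign_gb - outgoing_ru_gb)
    cost += excess_foreign * 8.7
    return cost

# Example: 300 GB in, 800 GB out to Russia, 400 GB out abroad -> both conditions met, 0 rub.
print(overage_cost(300, 800, 400))
```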


It is noteworthy that, for ease of accounting, the data center did not bother with lists of networks outside MSK-IX and SPB-IX (the Moscow and St. Petersburg traffic exchange points), which by then had long been interconnected to give St. Petersburg good connectivity with M9 and M10 (exchange points in Moscow) and vice versa; the Internet was still not widespread in the regions, and their share of traffic was genuinely small, especially over leased lines. For perspective, Norilsk got its own fiber only in September 2017, becoming the last major Russian city to do so. The project cost about 40 million US dollars; the fiber-optic line from Novy Urengoy is 986 km long, with a bandwidth of 40 Gbps and the possibility of expansion to 80 Gbps in the future.

It is funny to see how, in 2017, some people are only now able to rejoice at high-speed Internet that has been available to most of us for over 10 years:

Well, first of all, I finally started watching YouTube; before, I watched it once a year when I went to Moscow. I would open it and just couldn't be dragged away, and if I also went to a torrent tracker and downloaded something, that was something else entirely. Now I can watch calmly. Say a video comes out: I watch it once a week, and I don't have to consume that whole pile of information in one sitting. And I can Skype with people! It's just great! I walk around filming: "Look guys, it's winter here!" The only downside is that the iPhone shuts down in the cold.

You can watch a detailed video about the FOCL project here: part 1, part 2, part 3, part 4. Just keep in mind that the journalists made some inaccuracies: the satellite channel, according to them, gave the city only 1 Gbps, whereas in fact total traffic before the fiber line was introduced was about 2.5 Gbps. And bear in mind that the problem was not so much the speed as the high ping of the satellite link, which the city fell back to whenever the fiber line failed.

At the end of 2006, the first sites with online movies, file hosting and similar resources appeared in Russia. To reduce the cost of foreign traffic (Ukrainian traffic could be substantial and did not fit into the ratios prescribed by the likes of Agava), large projects tried to place some of their servers in data centers connected to UA-IX, or generated additional Russian traffic artificially, using torrents distributed exclusively to Russian users and, in some cases, file hosting services available only from Russian IP addresses. As a result, Ukrainian users who wanted to download at full speed often bought a Russian VPN, since the speed from the same ifolder.ru was always higher from within Russia:

Despite the popularity of torrents, file hosting services were gaining explosive popularity, since the download speed from them was often much higher than over torrents, and you did not need to seed or maintain a ratio (uploading more than you download, or at least no more than 3 times less). The asymmetric DSL channel was to blame: upload speeds were significantly lower than download speeds (by a factor of 10 or more), and not every user wanted to "seed" and keep a lot of files on their computer.

So, Wnet paid the subscriber $1 per GB for Ukrainian traffic, while foreign traffic cost the subscriber $10 per GB, provided the ratio of outgoing to incoming traffic was 4:1. Of course, this was still a significant price, because foreign traffic became free only if there was 10 times more Ukrainian traffic. Thus, in order to send 9 Mbit/s abroad for free, it was necessary to generate 90 Mbit/s to Ukraine. That was nothing like Agava's offer, where it was enough for foreign traffic not to exceed Russian traffic.

Therefore, the previously considered offer from the Volya data center was much more profitable than the offer from Wnet, which, moreover, on October 1, 2006 decided to withdraw from the Ukrainian traffic exchange point UA-IX after UA-IX refused to sell it the additional ports it needed. This may have been the result of a "peering war", that is, lobbying of the interests of other providers with which Wnet had begun to compete, or a genuine lack of technical capacity to provide additional ports, or perhaps the fact that Wnet had violated the agreement and built peering connections with other participants of the exchange point directly (a peculiarity of the exchange point's rules).

Thanks to this, by 2008 Volya already had a 20 Gbit/s connection to UA-IX and 4 Gbit/s to the rest of the world from several backbone operators. The further development of the hosting services market can be traced through our own history:

We started providing hosting services to users of our resource back in 2006, and in July 2009 we spun the services off into a separate project, ua-hosting.com.ua, which later went international, moved abroad completely, and is now known under the ua-hosting.company brand, also available via the short domain http://ua.hosting.

It is worth noting that over the past 10 years the market has undergone huge changes, driven not only by a significant reduction in the cost of backbone channels, but also by the redistribution of the audience among many projects after the closure of once-popular ones. Successful resources such as file hosting services, which used to top the Alexa traffic rankings, have sunk into oblivion for many reasons, but mainly because of the ongoing war with copyright holders.

Thus in Ukraine, the once-famous ex.ua, which generated over 15% of all traffic at the Ukrainian UA-IX exchange point (in effect, the traffic exchange point of Kiev, since regional operators were rarely represented there, especially after the appearance of the Giganet and DTEL-IX exchange points), was shut down following the closure of the equally famous fs.to, which at one time bought 100 Gbit/s in the Netherlands from us. The case of the once-famous Megaupload was even more resonant: over 600 of this file hosting service's servers were seized from the data center in the Netherlands where we are located. Rutracker was blocked on the territory of the Russian Federation by Roskomnadzor, and torrents.net.ua ceased to exist in Ukraine out of fear of reprisals.

The audience moved to YouTube, Instagram and other social networks. Sites for an adult audience have perhaps not lost popularity, but for our webmasters from the Russian Federation and Ukraine, earning on teaser advertising has lost all meaning because of advertising prices, and paying for foreign channels has become quite problematic, even though their price, by the way, has fallen significantly compared even with 2012, when it seemed it could not get any cheaper.

The situation on the backbone channel market, which determines the relative cost of traffic delivery

As we can understand from the information presented above, the price of Internet traffic depends on where the traffic needs to be delivered, how popular that direction is, what per-stream speed is required, and with what delay. The price also depends on which communication channels the traffic passes through, which determines how direct the route is and what priority the traffic receives in each case, which in turn determines the final latency (ping) from one point to another.

For example, 10 Gbit/s from Novy Urengoy to Norilsk will obviously cost not $2,000/month and not even $6,000/month, since over $40 million was invested in the construction of the FOCL. If that $40 million is recouped over, say, 15 years, the full 40 Gbit/s costs 40/15/12 ≈ $0.22 million per month, i.e. about $55,000/month per 10 Gbit/s, and that is not yet a channel to the Internet, only the cost of delivering traffic over a high-quality FOCL between two remote settlements. And this money has to be recovered from the population of Norilsk, who will watch the same YouTube (traffic to which will cost even more, since backbone delivery up to YouTube's networks also has to be paid for), which means traffic there will be quite expensive and the price will restrain the activity of the local population. There is also the option that YouTube may want to be "closer" to its users and pay part of their channel costs for them, in which case access to YouTube for the population of Norilsk may become cheaper. This example clearly demonstrates what the price of access to specific Internet resources can consist of. Someone always pays for your traffic, and if it is not you, it is either the advertisers and resources that generate this traffic, or a backbone provider, or simply an Internet provider that benefits from traffic in this direction (say, in order to get discounts in other directions or some tax incentives, which can be relevant in the case of Norilsk, or simply because a fairly wide channel was bought to get a discount on traffic delivery and it would otherwise sit idle).
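
A back-of-the-envelope version of that arithmetic (the 15-year payback period implied by the 40/15/12 figure is an assumption for illustration):

```python
# Rough cost of the Novy Urengoy - Norilsk FOCL per 10 Gbit/s of capacity,
# assuming the $40M construction cost is recouped evenly over 15 years.
capex_usd = 40_000_000
payback_years = 15                 # assumption behind the 40/15/12 figure above
capacity_gbps = 40

monthly_total = capex_usd / payback_years / 12          # ~$222,000/month for the full 40 Gbit/s
monthly_per_10g = monthly_total / (capacity_gbps / 10)  # ~$55,500/month per 10 Gbit/s
print(f"${monthly_total:,.0f}/month for 40 Gbit/s, ${monthly_per_10g:,.0f}/month per 10 Gbit/s")
```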

Tier I backbone operators, such as Cogent, Telia, Level 3, Tata and others, are distinguished by the fact that they charge everyone connected to them for traffic delivery, so traffic generators try to exchange traffic directly with the providers where their audience is located. This is how so-called peering wars arise, including between first-tier backbone operators and large generators, when priority is given to specific consumers while for others the price of cooperation is artificially inflated, either to crush a competitor or simply for the sake of enrichment, since the traffic generator has no other options. Disputes therefore arose very often, including litigation, as some companies failed to maintain net neutrality and tried to conceal it.

Thus, the dispute between Cogent and Google over IPv6 traffic has still not been resolved, which is why direct peering between the two companies is simply impossible. Cogent demands money from Google for the traffic on its network, while Google wants to peer for free, since the mass of Cogent's subscribers (data centers, home Internet providers) are active consumers of traffic from Google's networks, albeit over IPv4 rather than IPv6, and peering would reduce latency and the cost of traffic for those subscribers as the share of IPv6 traffic grew. But this is apparently unprofitable for Cogent, since it is a Tier I backbone provider, and external traffic from its networks is paid for by second-tier backbone providers (who pay Tier I providers and profit from third-tier providers) and even third-tier providers (who pay second-tier providers and receive money from end customers).

To understand what makes up the final price of traffic for a resource, let us consider the situation using the example of the popular service Cloudflare, whose essence is to bring websites "closer" to their audience, reduce the load on their infrastructure by caching static content, and filter out possible DDoS attacks.

Cloudflare, of course, hosts servers in all regions where there is demand for traffic, that is, almost all over the world. And in order to save on traffic, it tries to enter into peering agreements with regional providers that can deliver traffic from Cloudflare to users for free, bypassing expensive Tier I backbone operators, which charge for traffic in all cases. Why do local providers benefit from this? With significant traffic volumes they, like Cloudflare, would have to pay Tier I operators substantial sums for traffic delivery; it is much more profitable to connect a channel "directly" (invest once in construction) and receive traffic for free than to pay the backbone operator large sums every month. Even where direct peering is not possible, it can be cheaper to connect through the networks of other transit providers, where the cost of traffic is much lower than when transmitting through Tier I. Yes, the route becomes less direct, the ping may increase slightly, and the per-stream transfer speed may drop slightly, but the quality can still be acceptable enough to justify such savings.

But it is not always possible to conclude peering agreements; in some regions Cloudflare is still forced to buy a fairly large percentage of its connectivity from backbone providers, and the price of traffic varies greatly by region. Unlike some cloud services, such as Amazon Web Services (AWS), or traditional CDNs, which often pay per terabyte of traffic, Cloudflare pays for peak channel usage over a period of time, based on the maximum number of megabits per second used during the month with each of its backbone providers. This accounting method is called burstable billing, and its most common variant is the 95th percentile. The 95th percentile is a technique that provides flexibility and allows bursty bandwidth usage: the customer may exceed the bandwidth set by the tariff for 5% of the total time of using the channel without increasing the cost. For example, if your tariff assumes a bandwidth of 5 Mbit/s, the limit may be exceeded for about 36 hours every month (5% of 30 days). Bandwidth usage is measured and recorded every 5 minutes throughout the month, as the average over that five-minute interval: the amount of data transferred during the interval is divided by 300 seconds (its duration). At the end of the month, the top 5% of the samples are discarded, the maximum of the remaining 95% is taken, and that value is used to calculate the payment for channel bandwidth.
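
To make the mechanics concrete, here is a minimal sketch of 95th-percentile billing over synthetic 5-minute samples (the traffic profile is invented purely for illustration):

```python
import random

def billable_95th_percentile(samples_mbps):
    """Burstable billing: drop the top 5% of 5-minute samples, bill on the max of the rest."""
    ordered = sorted(samples_mbps)
    cutoff = int(len(ordered) * 0.95)      # number of samples kept
    return ordered[cutoff - 1] if cutoff else 0.0

# A month of 5-minute samples (30 days = 8640 intervals): mostly 20-60 Mbit/s,
# with occasional bursts to 300 Mbit/s that stay within the unbilled 5% of the time.
random.seed(1)
month = [300 if random.random() < 0.03 else random.uniform(20, 60) for _ in range(8640)]

print(f"billable rate: {billable_95th_percentile(month):.1f} Mbit/s")  # close to 60, not 300
```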

There is a legend that in its early days Google used 95th-percentile contracts to crawl the web at very high bandwidth during a single 24-hour window, keeping traffic intensity much lower the rest of the time (a 24-hour burst fits comfortably within the roughly 36 unbilled hours the 95th percentile allows each month), and thereby saved significantly on the cost of the channels it consumed. A clever, but certainly not very durable strategy, since later the company still had to build its own data centers and even its own channels in order to index resources more often and pay less for intercontinental transport traffic.

Another "subtlety" is that you usually pay backbone providers for the prevailing traffic (inbound or outbound), which in the case of CloudFlare allows you to completely not pay for incoming traffic. After all, CloudFlare is a caching proxy service, as a result of which the output (out) usually exceeds the input (in) by about 4-5 times. Therefore, bandwidth is billed exclusively according to the values ​​of outgoing traffic, which allows you not to pay for the entrance completely. For the same reason, the service does not charge additional fee when the site comes under a DDOS attack. The attack will certainly increase the consumption of incoming traffic, but if the attack is not very large, incoming traffic will still not exceed outgoing traffic and, therefore, this will not increase the cost of the channels used.

Most peering traffic is usually free, which is not the case for Netflix, which after long disputes had to pay Verizon and Comcast for peering connections in order to provide acceptable video streaming quality to users on their networks.

In the diagram above, we can see how Cloudflare's free peering connections grew over the course of 3 months, for both the IPv4 and IPv6 versions of the Internet Protocol. Below, also over 3 months, we can observe the global growth of Cloudflare's peering traffic; the company currently uses over 3,000 peering connections and saves about 45% of the cost of expensive backbone transit.

Cloudflare does not disclose exactly how much it pays for backbone transit, but it does provide comparative figures for different regions, from which one can draw an approximate conclusion about its costs.

Consider North America first. Let us take a blended average of $10 per Mbit/s per month across all transit providers as our benchmark for North America. In reality the payment will be less than this and will depend on volumes and on the chosen backbone operator, but it can serve as a benchmark for comparing costs with other regions. At this figure, each 1 Gbit/s costs $10,000 per month (again, remember that this value is higher than the real one and is typical of retail purchases; it is just a benchmark that helps illustrate the difference).

The effective price of bandwidth in a region is the blended cost of transit ($10 per Mbit/s) and peering ($0 per Mbit/s). Every byte sent over peering is a potential transit byte that does not have to be paid for. While North America has some of the lowest transit prices in the world, it also has one of the lowest shares of peering. The graph below shows the ratio of peering to transit traffic in the region; although it has improved for Cloudflare over the past three months, North America still lags behind every other region in the world in terms of peering.

Thus, while Cloudflare's share of peering traffic exceeds 45% globally, in North America it is only 20-25%, which puts the effective cost, without discounts, at $7.5-8 per Mbit/s. As a result, North America is only the second cheapest region in the world for traffic. But where is the cheapest?
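
Every effective-cost figure in this and the following paragraphs follows from the same simple blend of paid transit and free peering. A small sketch using the approximate transit prices and peering shares quoted around it (treat the numbers as rough benchmarks, not Cloudflare's actual rates):

```python
# Effective cost = transit price x the share of traffic that still rides paid transit
# (peered bytes cost $0). Figures are the approximate ones quoted in the text.

def effective_cost(transit_usd_per_mbps, peering_share):
    return transit_usd_per_mbps * (1 - peering_share)

regions = {
    # region: (transit $/Mbit/s, share of traffic delivered over free peering)
    "North America": (10,  0.225),  # benchmark transit, 20-25% peering
    "Europe":        (10,  0.50),   # ~45-50% peering
    "Asia":          (70,  0.55),   # 50-55% peering
    "Latin America": (170, 0.60),   # transit ~17x the NA/EU benchmark
    "Australia":     (200, 0.50),
    "Africa":        (140, 0.90),
}

for name, (transit, peering) in regions.items():
    print(f"{name:14s} ~${effective_cost(transit, peering):.1f} per Mbit/s")
```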

We have already discussed Europe, where, thanks to the historical concentration of the population in certain regions, there have long been a great many exchange points. It is due to this that a larger share of peering traffic can be achieved and, as a result, the cheapest traffic in the world, since the share of peering traffic for the same Cloudflare there is at the level of 45-50%.

As a result, our reference cost drops to $4.5-5 per Mbit/s or less. The share of peering traffic depends directly on the number of participants at the largest exchange points in Europe: AMS-IX in Amsterdam, DE-CIX in Frankfurt and LINX in London. In Europe, traffic exchange points are mostly run by non-profit organizations, whereas in the USA exchange points are mostly commercial (the same Equinix in New York), which significantly affects the number of participants at these exchange points and, consequently, the number of networks willing to sign peering agreements. For comparison, according to 2014 statistics, Amsterdam had about 1,200 participants, while the USA had only about 400.

In the Asian region, the ratio of peering to transit traffic is approximately the same as in Europe; Cloudflare's graph shows a value close to 50-55%. However, the cost of transit is 6-7 times higher than the European reference and amounts to as much as $70 per Mbit/s. Thus, the effective cost of traffic is in the range of $28-32 per Mbit/s, 6-7 times higher than in Europe.

In general, transit in the Asian region is more expensive for several reasons. The main one is less competition and more monopoly providers. Second, the Internet services market is less mature. Finally, a look at the map of Asia shows plenty of obstacles: water, mountains, hard-to-reach and sparsely populated regions. Laying submarine cables is far more expensive than laying fiber even in the most remote land region (which is itself not cheap), so the cost of intercontinental transit and of transit through remote areas is recouped through the cost of transit within the rest of the continent, which is artificially inflated to cover the costs of external and internal "remote" connectivity.

Latin America was a new region for Cloudflare, and within 3 months of establishing a presence there, the share of peering traffic grew from 0 to 60%.

However, the cost of transit, as in Asia, is very high: it is 17 times more expensive than in North America or Europe, and the effective cost is $68 per Mbit/s, twice as high as in Asia, even though the share of peering traffic is among the best in the world. Another problem of Latin America is that in many countries there are no carrier-neutral data centers where participants can freely interconnect and exchange traffic with each other. Brazil turned out to be the most developed in this regard, and after Cloudflare built its own data center in Sao Paulo, the number of peerings grew significantly, making it possible to reach that 60% peering share.

The most expensive traffic for Cloudflare is perhaps in Australia, since delivering it involves a large number of submarine cables. Although the share of peering traffic within the region reaches 50%, Telstra, the monopoly operator on the Australian telecommunications market, does not allow the cost of domestic transit to fall below $200 per Mbit/s, citing how widely the population is dispersed across the region; that is 20 times the reference value in Europe or the USA. As a result, the effective cost of traffic in 2015 stood at $100 per Mbit/s, one of the highest in the world. In total, Cloudflare spends roughly as much on transit in Australia as it does in all of Europe, although Australia's population is 33 times smaller (22 million versus 750 million in Europe).

Interestingly, in Africa, despite the high cost of transit (around $140 per Mbit/s), Cloudflare managed to negotiate peering with 90% of providers, bringing the effective cost of traffic down to about $14 per Mbit/s. Thanks to this, websites served from London, Paris and Marseille began to open quite quickly, and the presence in West Africa was expanded, giving residents of Nigeria, with its roughly 100 million Internet users, faster access to European resources. In the Middle East, the share of peering traffic even reached 100%, making it the cheapest region in the world for Cloudflare, if you do not count the costs of building and maintaining data centers.

Just 2 years later, at the end of 2016, prices in the most expensive region, Australia and Oceania, had fallen by 15%, bringing the price of traffic down to $85 per Mbit/s. Thus, in Cloudflare's case, the statistics looked something like this:

Interestingly, today there are six most expensive backbone providers - HiNet, Korea Telecom, Optus, Telecom Argentina, Telefonica and Telstra - whose traffic costs Cloudflare far more than traffic from other connectivity providers around the world and who refuse to discuss lower transit prices. For Cloudflare, the total traffic to these six networks amounts to less than 6% of overall consumption, yet almost 50% of the money spent on connectivity goes to paying for traffic from these six most expensive networks. Of course, this could not go on forever, and instead of maintaining a presence in Australia and New Zealand, Cloudflare decided to redirect the traffic of its "free" users to more remote data centers (Singapore or Los Angeles), where the cost of external channels is more reasonable. Ironically, as a result the same Telstra began to pay more, since its submarine links became busier after this shift, which may serve as a good signal for lowering prices in the region for resources like Cloudflare.

Absolute costs of traffic delivery from transit providers in different regions

TeleGeography is a wonderful resource: it shows not only the growth of submarine backbone networks, but also the average market prices for traffic delivery, calculating the median price of transit from backbone providers in a given region. To show you the current order of prices, I had to order a small report from them, and analysis of its data produced the following results.

As you can see, the cost of the Dedicated Internet Access (DIA) service is currently highest in Mumbai, Sao Paulo and Caracas, while in Europe and the US it stands at around $6 and $8 per Mbit/s, respectively.

This raises a logical question: how, at such prices, is it possible today to rent servers, say a configuration of 2 x Intel Dodeca-Core Xeon E5-2650 v4 / 128GB DDR4 / 6 x 480GB SSD with a 1 Gbit/s channel and a traffic limit of 100 TB, for $249/month, which is equivalent to 300+ Mbit/s of real consumption, if the cost of 1 Mbit/s averages $6/month according to the current report?
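
A quick check of that equivalence, and of why the numbers look paradoxical at list price (the 30-day month and the $6/Mbit/s figure are taken from the report above):

```python
# Why a 100 TB monthly limit corresponds to "300+ Mbit/s of real consumption",
# and what that would cost at the ~$6 per Mbit/s European benchmark.
SECONDS_PER_MONTH = 30 * 24 * 3600                 # ~2.6 million seconds

avg_mbps = 100e12 * 8 / SECONDS_PER_MONTH / 1e6    # 100 TB spread evenly over a month
print(f"sustained average: {avg_mbps:.0f} Mbit/s")         # ~309 Mbit/s
print(f"naive transit cost: ${avg_mbps * 6:,.0f}/month")   # ~$1,850, far above $249
```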

How can traffic in data centers be sold cheaper?

Large data centers, such as EvoSwitch in Amsterdam where we are located, have their own well-developed fiber backbone networks, which allows significant savings on traffic delivery to certain regions and increases the number of possible peerings. In 2017, our partner became the owner of one of the largest and highest-quality networks.

As we can see from the illustration, total network capacity has reached 5.5 Tbit/s, with 36 points of presence around the world, over 2,000 peering connections, and direct connections to 25 traffic exchange points. All of this, of course, affects the effective cost of traffic, which, as we recall, is the blend of paid backbone transit connectivity and free peering connectivity, and which can be reduced further by charging the traffic-consuming provider for peering connections. That is, traffic can be paid for not only by the traffic generator but also by the recipient: the provider into whose network the traffic flows and who is interested in organizing a peering connection in order to pay backbone providers less and save on traffic in the same way the data center does. Among other things, a data center almost always has the opportunity to sell its surplus "incoming" bandwidth to home Internet providers, whose users mostly need exactly that kind of traffic and which in most data centers sits unused.

Nevertheless, even such an extensive network does not allow the cost of traffic to be made infinitely low. Therefore, whatever guaranteed traffic conditions a data center offers, you need to understand that the final low price is achieved by selling bandwidth with a reasonable degree of oversell, that is, selling more connectivity than physically exists, while strictly accounting for the real traffic needs of the data center's users so that each user still gets their guaranteed bandwidth at the moment they need it. Moreover, the more users are served and the more peering and backbone channels are connected to the network, the more you can save on traffic.

Consider an example. Ten users need a guaranteed 100 Mbit/s link to their servers, but they do not always use it at 100% and quite often not at the same time. After analyzing real consumption, it turns out that all ten users together consume no more than 300 Mbit/s at peak, so buying 1 Gbit/s of dedicated bandwidth, or 2 Gbit/s from different operators once redundancy is taken into account, and charging each user for the full dedicated channel (essentially twice over) becomes unreasonable. It is far more sensible to buy almost three times less, 700 Mbit/s, split between two independent backbone operators: this still provides a dedicated 100 Mbit/s channel for each of the 10 clients at the stated consumption level, even with full redundancy, and leaves about 100 Mbit/s for "growth" should someone start consuming more, which gives time to connect additional channels. If traffic is supplied by three independent backbone providers, the purchase becomes even more favorable, since 500 Mbit/s of bandwidth, or even less, is enough: with high probability at most one channel out of three will fail at a time, taking away no more than about 166 Mbit/s of connectivity when the maximum need is 300 Mbit/s. Thus, we retain at least 334 Mbit/s of bandwidth at any time, which is sufficient for our subscribers' needs even if one of the uplinks goes down.
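
A minimal sketch of this redundancy arithmetic, assuming the purchased bandwidth is split evenly across independent uplinks:

```python
def survives_single_failure(total_mbps, uplinks, peak_demand_mbps):
    """With the purchase split evenly, does capacity still cover peak demand
    after any one uplink fails?"""
    remaining = total_mbps * (uplinks - 1) / uplinks
    return remaining >= peak_demand_mbps

peak = 300  # Mbit/s, the observed joint peak of the ten 100 Mbit/s clients
print(survives_single_failure(700, uplinks=2, peak_demand_mbps=peak))   # True: 350 Mbit/s remains
print(survives_single_failure(500, uplinks=3, peak_demand_mbps=peak))   # True: ~334 Mbit/s remains
print(survives_single_failure(2000, uplinks=2, peak_demand_mbps=peak))  # True, but heavily over-provisioned
```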

In reality, the situation is even simpler and the degree of fault tolerance and redundancy higher, since there are often not ten but tens of thousands of clients with 100 Mbit/s channels, and most of them use very little traffic. Thus, 1,000 servers with unmetered 100 Mbit/s channels, according to our statistics, consume on average only 10-15 Gbit/s at peak, or even less, which is equivalent to 10-15% of the bandwidth allocated to them. At the same time, everyone is given the opportunity to consume 100 Mbit/s whenever they need it, without metering, and channels are established with a great many backbone operators. There are even more peering connections, of course, which often makes connectivity cheaper and of higher quality and removes the risk of losing a huge part of connectivity at once. Thanks to this, the share of bandwidth set aside for fault tolerance drops from 50% to 5% or less. Of course, there are clients who load their channels flat out, but there are also those who consume extremely little traffic while renting a dedicated server with an unmetered 100 Mbit/s channel, simply because it is convenient: no need to fear an expensive overage fee, or the subscriber simply does not understand how much traffic they really need or how to calculate it. In effect, the users who do not consume all the bandwidth allocated to them pay for the traffic of the users who use the channel to the full.

Among other things, you also need to remember the daily distribution of traffic to Internet projects, which also contributes to reducing costs. If your channel is 100% loaded in the evening, at the time of peak visits to your resource, then for the rest of the day the load will most likely be far below 100%, down to 10-20% at night, and the freed-up channel can be used for other needs (generating traffic to another region is not an option here, since that would most likely incur expensive transport charges). If the channel is saturated during peak hours, however, visitors will start to experience problems and leave the website, and traffic will inevitably fall due to worsening behavioral factors and the resource dropping in search results, if the project's traffic is mainly from search.

In the case of gigabit connections, of course, channel utilization was above 10-15% in the initial period of the offer and could reach 50% or more, since such servers used to be ordered by traffic-generating subscribers for whom a 100 Mbit/s port was not enough, while a gigabit port was much more expensive and made no sense for ordinary users to pay for without a real need. These days, when you can get 1 Gbit/s or even 10 Gbit/s at home and the difference in cost between a switch supporting 1 Gbit/s and one supporting 100 Mbit/s has become insignificant, it turns out to be far more profitable to give everyone a 1 Gbit/s channel, even when they do not really need it, rather than limit the bandwidth, if only so that the client downloads the information they need as quickly as possible and thereby frees up bandwidth for the next subscriber much sooner in cases where they do not need to generate traffic constantly. That is why traffic utilization for servers with a 1 Gbit/s channel and a 100 TB limit turned out in reality to be well under 10%: most users, of course, do not need such a channel all the time and release it 10 times faster for use by subsequent subscribers.

When applying this principle of providing Internet channels, there is a clear need to monitor traffic consumption in individual segments of the data center network and even in each rack, so that as soon as someone's need for bandwidth grows and the traffic reserve starts to shrink, an additional channel can be added, thereby keeping the "unmetered" guarantee for everyone. Overall, this approach saves significant money on external communication channels and makes it possible to offer prices an order of magnitude lower than without it, and even to earn on traffic. After all, a data center cannot sell traffic at cost; it is obliged to make money, since it spends time and resources on network maintenance, and a "healthy" business must be profitable.

Thus, an oversell coefficient is present everywhere to a greater or lesser degree, even when selling offers with a 10 Gbit/s unmetered channel to dedicated servers, which, it would seem, should consume their traffic in full. But reality turned out differently. At one point we sold over 50 dedicated servers with a 10 Gbit/s unmetered connection each, yet our total generated traffic barely exceeded 250 Gbit/s, even though the same channel was also used by 900+ dedicated servers with 100 Mbit/s and 1 Gbit/s connections each. Because of this, we were able to offer servers with a guaranteed 10 Gbit/s channel at the incredible price of $3,000/month, and later almost 2 times cheaper (from $1,800 US). We were the first to sell connectivity at such low prices, which is precisely why we managed to generate so much traffic and gain a lot of satisfied customers.

Today we are ready to go even further: thanks to cooperation with the Tier I backbone operator Cogent, we are able to sell surplus connectivity in certain segments of its network in the Netherlands and the USA even cheaper - from $1,199 for a server with a 10 Gbit/s unmetered channel and from $4,999 for a server with a 40 Gbit/s unmetered channel.

You can place an order at https://ua-hosting.company/serversnl; if you need a US location, open a request via a ticket. However, the Netherlands is the optimal location for our regions in terms of connectivity.

2 x Xeon E5-2650 / 128GB / 8x512GB SSD / 10Gbps - $1199
2 x Xeon E5-2650 / 256GB / 24x512GB SSD / 10Gbps - $2099
2 x Xeon E5-2650 / 256GB / 24x1TB SSD / 10Gbps - $3599
2 x Xeon E5-2650 / 256GB / 24x2TB SSD / 10Gbps - $6599

2 x Xeon E5-2650 / 256GB / 8x1TB SSD / 20Gbps - $1999
2 x Xeon E5-2650 / 256GB / 24x512GB SSD / 20Gbps - $2999
2 x Xeon E5-2650 / 256GB / 24x1TB SSD / 20Gbps - $4599
2 x Xeon E5-2650 / 256GB / 24x2TB SSD / 20Gbps - $7599

2 x Xeon E5-2650 / 256GB / 24x512GB SSD / 40Gbps - $4999
2 x Xeon E5-2650 / 256GB / 24x1TB SSD / 40Gbps - $5599
2 x Xeon E5-2650 / 256GB / 24x2TB SSD / 40Gbps - $8599

In individual cases we can also offer reasonable prices for 100 Gbit/s unmetered connections; please contact us if you need such connectivity.

Of course, we and Cogent expect that you will not consume all the bandwidth allocated to you, and the proposed configurations contribute to this: when used with a RAID controller, it is quite difficult to push more than 6 Gbit/s of traffic, since the controller becomes the bottleneck. If the drives are used independently, however, it becomes possible to distribute the traffic optimally. In any case, we ensure that the stated bandwidth can actually be consumed, regardless of what we and Cogent hope for. Moreover, what is being sold is surplus connectivity which, if unsold, would simply sit idle. And we should not forget that Cogent, as a backbone provider, takes money from everyone: the traffic you generate will in any case also be paid for by the provider into whose network it arrives.

However, you should not expect that by purchasing a server with such a channel you will get 10, 40 or 100 Gbit/s per stream; that is simply impossible for this kind of money, and often not necessary. High-speed point-to-point transport can cost a great deal, in some cases $55,000 for 10 Gbit/s, as in the Novy Urengoy - Norilsk FOCL we reviewed above. But excellent connectivity to the Internet as a whole is guaranteed. For most projects it is enough for the average speed per stream to exceed 10 Mbit/s, which allows hosting projects with Ultra HD video streaming and serving 1,000-4,000 simultaneous viewers from a single server.

However, in some cases the speed per stream can be significant even with a modest channel fee. Thus, last year in the US, 10 Gbit/s home Internet began to spread: for a modest fee of $400/month it became possible to get such an "unlimited" channel at home.

In such cases, home routers providing Wi-Fi access often turn out to be the bottleneck (capable of perhaps 300 Mbit/s), so you have to return to wired connections and even install servers at home, with sufficiently powerful computers and drives, so as not to be limited by them when using the channel. Why is this needed? Many people now work with data from home. James Busch, a US radiologist, analyzes patient data from home, and the new channel saves him a lot of time.

“On average, an x-ray examination contains about 200 megabytes of data, while PET scans and 3D mammography can take up to 10 gigabytes. Thus, we are forced to process hundreds of terabytes of data. We calculated that we save about 7 seconds per study on average by using a 10 Gbit/s connection instead of a gigabit one. It does not seem like much, but multiplied by the number of studies we perform per year, 20-30 thousand, it turns out that we save about 10 days of productive work just by improving the connection speed by an order of magnitude.”

Thus, if you need high per-stream speed at minimal cost, place your 10, 20, 40 or 100 gigabit server as close to your users as possible. Then it is likely you will be able to generate traffic to some segments of the Internet at 1 or even 10 Gbit/s per stream.

Our time offers unique opportunities for new achievements. It is now hard to claim that hosting or renting a dedicated server is too expensive, and starting your own business or project has never been easier. The most productive server configurations are available, in some cases exceeding the capabilities of ten-year-old servers by up to three orders of magnitude, at prices not much higher than hosting cost in 2005; almost anyone can now afford a real dedicated server. Traffic has become a thousand times cheaper, and channel speeds are higher. How you use them is up to you. Anyone can come up with an interesting Internet project, so stop wasting your time. Rent a dedicated server, or at least a virtual one, and get started today, even if you do not need it yet and know nothing about it: it will serve as good motivation to keep going. Use these opportunities to make our world a better place. Even if you have no experience in web development or building Internet projects, it is never too late to start; I once started from zero and ended up generating more traffic than all of Belarus! I hope my experience will be useful to you. We are building the Internet, join us!

BLACK FRIDAY CONTINUES: 30% discount on the first payment with a promo code BLACK30% when ordering for 1-6 months!

These are not just virtual servers! These are VPS (KVM) with dedicated drives that can be just as good as dedicated servers, and in most cases better! We offer VPS (KVM) with dedicated drives in the Netherlands and the USA (configurations starting from VPS (KVM) - E5-2650v4 (6 Cores) / 10GB DDR4 / 240GB SSD or 4TB HDD / 1Gbps 10TB at a uniquely low price - from $29/month, available with RAID1 and RAID10). Do not miss the chance to order a new type of virtual server, where all the resources belong to you, as on a dedicated server, at a much lower price and with much more productive hardware!

How to build corporate-class infrastructure using Dell R730xd E5-2650 v4 servers worth 9,000 euros for pennies? Dell R730xd 2 times cheaper? Only here: 2 x Intel Dodeca-Core Xeon E5-2650v4 128GB DDR4 6x480GB SSD 1Gbps 100 TB from $249 in the Netherlands and USA!
