Big Data Distribution Across The World
This research is carried out within the BitBooster project.
Background and motivation:
The Internet was born in 1969, when the first data transmission between two computers over a distance of about 600 km took place. It was the first step into the world of data transmission. Over time, the Wide Area Network has evolved, and the distances between communicating nodes have grown accordingly. Today, the main communication problem is not simply sending data; it is transmitting data reliably and at the high rates that society demands.
There are two well-known transport protocols that are widely used for data transmission: TCP and UDP. The principal difference between them is reliability. TCP is a reliable transport protocol widely used for data transport in IP networks and sub-networks. It is suitable for transmitting data streams with guaranteed integrity, reliability, and delivery checking. Despite its ubiquity, TCP has significant drawbacks. Considering that the protocol was created almost 40 years ago, it is easy to see that it was designed for networks entirely different from those we use today. At that time, TCP carried data over local networks where:
- RTTs were up to 20 ms;
- link capacities were in the range of single to tens of megabits per second.
The data volumes and distances involved were also far smaller than today. Thus, the communication industry faces two main problems: how to transmit Big Data through LFPs (Long Fat Pipes), e.g. between continents, and how to avoid congestion on such connections more effectively. The original TCP design cannot fully satisfy the data transmission needs of modern society.
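The Long Fat Pipe problem can be made concrete with a back-of-the-envelope bandwidth-delay product (BDP) calculation. The sketch below uses illustrative numbers only (a 10 Gbps intercontinental link with 300 ms RTT) and the classic 64 KiB TCP window available without the window-scaling extension:

```python
# Bandwidth-delay product (BDP): the amount of data that must be "in flight"
# to keep a link fully utilized. Illustrative numbers for an intercontinental path.

def bdp_bytes(bandwidth_bps: float, rtt_s: float) -> float:
    """Data in flight needed to saturate the link (bytes)."""
    return bandwidth_bps * rtt_s / 8

def max_throughput_bps(window_bytes: float, rtt_s: float) -> float:
    """Throughput ceiling imposed by a fixed send window: one window per RTT."""
    return window_bytes * 8 / rtt_s

link = 10e9             # 10 Gbps intercontinental link
rtt = 0.3               # 300 ms round-trip time
classic_window = 65535  # classic TCP window without window scaling

print(f"BDP: {bdp_bytes(link, rtt) / 1e6:.0f} MB must be in flight")
print(f"A classic 64 KiB window caps throughput at "
      f"{max_throughput_bps(classic_window, rtt) / 1e6:.2f} Mbps")
```

On such a path, hundreds of megabytes must be in flight at once, while an unscaled TCP window limits a single flow to a few megabits per second.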
In the FILA group, it was decided to investigate new ways of data transport that consume existing network resources more effectively, even when the quality of the legacy infrastructure is poor. The idea is to embed algorithms for available-bandwidth analysis of the communication channel into the reliable data transmission process. This approach allows congestion to be avoided before it occurs: thanks to such "congestion prediction", effective bandwidth utilization is achieved, and a transport protocol equipped with this algorithm adapts quickly to bandwidth changes on a connection. The FILA group has developed such an alternative protocol. Reliable Multi-Destination Transport (RMDT) is a point-to-multipoint data transport protocol that provides reliable multi-gigabit Big Data distribution across the world. It works in any IP-based network and can handle high packet losses as well as high RTTs and jitter.
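As an illustration only (RMDT's patent-pending algorithms are not public, so every name and constant below is a hypothetical stand-in), a sender driven by available-bandwidth measurements could adapt its rate along these lines:

```python
# Illustrative sketch of congestion avoidance driven by available-bandwidth
# estimates rather than loss events. All names and parameters are hypothetical
# and do not represent RMDT's actual (patent-pending) algorithms.

HEADROOM = 0.9    # send at 90% of the estimated available bandwidth
SMOOTHING = 0.25  # damping factor against noisy measurements

def adapt_rate(current_rate_bps: float, measured_avail_bps: float) -> float:
    """Move the sending rate toward a safe fraction of the available bandwidth.

    Because the rate tracks *available* (unused) capacity, the sender backs
    off as cross-traffic grows -- before queues overflow and losses occur.
    """
    target = HEADROOM * measured_avail_bps
    return current_rate_bps + SMOOTHING * (target - current_rate_bps)

# Cross-traffic ramps up and then subsides on a 1 Gbps link:
rate = 900e6
for avail in (1000e6, 800e6, 500e6, 500e6, 900e6):
    rate = adapt_rate(rate, avail)
    print(f"available {avail/1e6:6.0f} Mbps -> sending {rate/1e6:6.1f} Mbps")
```

The key contrast with loss-based TCP is the input signal: the sender reacts to shrinking spare capacity, not to packets already dropped by an overflowing queue.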
Unique RMDT protocol features include:
- Multiple recipients within a single session
- Reliability at full rate over WAN
- End-to-end 10G communication (1×10)
- Efficient system resource consumption
- Operation over legacy IP infrastructure
- Suitable for streaming
The RMDT protocol can be used in WANs to ensure maximum bandwidth utilization. It is resistant to high losses, RTT, and delay jitter. Our research demonstrated that it can sustain up to 950 Mbps per receiver in the presence of 2% losses and 300 ms RTT. Thanks to its patent-pending ABC algorithms, the protocol copes with very aggressive cross-traffic through highly efficient rate adaptation, which moves RMDT to the next level of intelligence in comparison with competitors.
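To put the 950 Mbps figure in perspective, the well-known Mathis model bounds the steady-state throughput of a loss-based (Reno-style) TCP flow at roughly (MSS / (RTT·√p))·√(3/2). A quick calculation with illustrative parameters shows how little such a flow could achieve under the same 2% loss and 300 ms RTT:

```python
# Mathis-model throughput bound for loss-based TCP congestion control.
# MSS of 1460 bytes is an illustrative assumption (standard Ethernet payload).
import math

def mathis_throughput_bps(mss_bytes: int, rtt_s: float, loss: float) -> float:
    """Approximate upper bound on a Reno-style TCP flow's steady-state rate."""
    return (mss_bytes * 8 / (rtt_s * math.sqrt(loss))) * math.sqrt(1.5)

tcp_limit = mathis_throughput_bps(1460, 0.3, 0.02)
print(f"Loss-based TCP ceiling: {tcp_limit / 1e6:.2f} Mbps")  # well under 1 Mbps
```

Under these conditions a loss-driven TCP flow is limited to a fraction of a megabit per second, roughly three orders of magnitude below the rate reported for RMDT.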
Under the same conditions, competing multi-destination protocols perform 5 to 20 times slower than RMDT. The FILA group was present at CeBIT 2017 and demonstrated the achieved results by transmitting FullHD video streaming at an aggregate data rate of up to 10 Gbps (in individual streams of up to 1 Gbps each) with losses of up to 4%.