What is the difference between a core switch and a regular switch?

We often talk about the core switch, so how does it differ from a conventional switch? Let's compare the two today.

First, the difference in ports

A conventional switch generally has 24 to 48 ports, most of them Gigabit or 100M Ethernet ports. Its main job is to carry user access traffic or to aggregate traffic from access-layer switches, with support for VLANs, simple routing protocols, and basic SNMP functions. Its backplane bandwidth is relatively small.

A core switch has many more ports and is usually modular, so optical ports and Gigabit Ethernet ports can be mixed freely. Core switches are generally Layer 3 switches on which advanced network features such as routing protocols, ACLs, QoS, and load balancing can be configured. Most importantly, the backplane bandwidth of a core switch is far higher than that of a conventional switch, and it usually has separate supervisor (engine) modules in a primary/backup arrangement.

Second, the difference in how users connect to or access the network

The part of the network that connects users directly, or through which they access the network, is usually called the access layer; the part between the access layer and the core layer is called the distribution or aggregation layer. The access layer exists to let end users connect to the network, so access-layer switches are characterized by low cost and high port density. An aggregation-layer switch is the aggregation point of multiple access-layer switches: it must handle all traffic from the access-layer devices and provide uplinks to the core layer, so aggregation-layer switches need higher performance, fewer interfaces, and higher switching rates.

The core of the network is called the core layer. Its main purpose is to provide an optimized, reliable backbone for high-speed forwarding, so core-layer switches must deliver higher reliability, performance, and throughput.

Third, what are the advantages of the core switch

Compared with conventional switches, data center switches need features such as large buffers, high capacity, virtualization, FCoE, and Layer 2 TRILL technology:

(1) Large buffer technology

Data center switches abandon the egress-port buffering model of traditional switching and adopt a distributed buffer architecture with far more buffer than an ordinary switch: buffer capacity can exceed 1 GB, whereas a general switch offers only 2 to 4 MB. Each port can then absorb 200 milliseconds of burst traffic at 10 Gbit/s full line rate, so even under bursty traffic the large buffer can keep the network at zero packet loss, which suits the large-flow characteristics of data center servers.
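The buffer figure implied by those numbers can be checked with simple arithmetic (a sketch using only the rates quoted above):

```python
# Rough check of the burst-buffer requirement quoted above:
# absorbing 200 ms of traffic at 10 Gbit/s full line rate.
LINE_RATE_BPS = 10e9      # 10 Gbit/s port speed
BURST_MS = 200            # burst duration to absorb

bits = LINE_RATE_BPS * (BURST_MS / 1000)   # bits that must be buffered
megabytes = bits / 8 / 1e6                 # convert bits -> megabytes

print(f"{megabytes:.0f} MB per port")      # 250 MB
```

At 250 MB per 10G port, a buffer of 2 to 4 MB clearly cannot absorb such a burst, which is why the distributed large-buffer design matters.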

(2) High-capacity equipment

Data center network traffic is characterized by high-density application scheduling and surging bursts. A general switch, however, is designed mainly for basic interconnection: it cannot accurately identify and control services, cannot respond quickly and with zero packet loss under heavy load, and cannot guarantee service continuity, so system reliability rests mainly on device reliability.

Regular switches therefore cannot meet the needs of data centers. Data center switches need high-capacity forwarding: they must support high-density 10 Gigabit boards, that is, 48-port 10G boards forwarding at full line rate, which in practice requires a CLOS distributed switching architecture. In addition, as 40G and 100G become popular, 8-port 40G boards and 4-port 100G boards are gradually being commercialized; data center switch 40G and 100G boards have already entered the market, meeting the high-density requirements of data center applications.
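The per-slot capacity a full-line-rate 48-port 10G board demands can be estimated with back-of-envelope arithmetic (a sketch; full duplex doubles the one-direction figure):

```python
# Per-slot switching capacity needed for a 48-port 10G board
# forwarding at full line rate (back-of-envelope sketch).
ports = 48
port_gbps = 10

one_direction = ports * port_gbps    # 480 Gbit/s of ingress traffic
full_duplex = one_direction * 2      # 960 Gbit/s counting both directions

print(one_direction, "Gbit/s one way;", full_duplex, "Gbit/s full duplex")
```

Numbers of this size per slot are why a single shared bus or crossbar runs out of headroom and a CLOS-style distributed fabric is used instead.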

(3) Virtualization technology

Data center network equipment must be highly manageable and highly secure, so data center switches also need to support virtualization, which turns physical resources into logically manageable resources and breaks down physical boundaries. Network device virtualization mainly includes many-to-one technologies (multiple physical devices managed as one logical device) and one-to-many technologies (one physical device partitioned into several logical devices).

Through virtualization, multiple network devices can be managed in a unified manner and services on one device can be completely isolated, which is said to reduce data center management costs by 40% and increase IT utilization by approximately 25%.

(4) TRILL technology

When building the Layer 2 network of a data center, the original standard was the STP protocol, but it has clear drawbacks: STP works by blocking ports, so redundant links forward no data and bandwidth is wasted; and because an STP network has only one spanning tree, packets often have to transit the root bridge, which hurts the forwarding efficiency of the whole network.

STP is therefore unsuitable for the expansion of very large data centers, and TRILL was created to address these shortcomings. TRILL is a technology designed for data center applications: it combines the simple configuration and flexibility of Layer 2 with the scalability of Layer 3, so the Layer 2 network needs almost no configuration yet forwards across the whole network without loops. TRILL is a basic Layer 2 feature of data center switches that ordinary switches do not offer.
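The bandwidth waste described above can be illustrated with a toy topology sketch. This is illustrative only: real STP elects a root bridge by bridge ID and compares path costs, whereas here a root "A" is simply assumed and a BFS tree stands in for the spanning tree. TRILL, by contrast, routes over all links.

```python
# Toy illustration of STP's cost: on a redundant topology, only the
# spanning-tree links forward traffic; every other link sits blocked.
from collections import deque

links = [("A", "B"), ("B", "C"), ("C", "D"), ("D", "A"), ("A", "C")]

# Build an adjacency map of the topology.
adj = {}
for u, v in links:
    adj.setdefault(u, []).append(v)
    adj.setdefault(v, []).append(u)

# BFS from an assumed root "A" yields a spanning tree.
tree, seen, q = set(), {"A"}, deque(["A"])
while q:
    u = q.popleft()
    for v in adj[u]:
        if v not in seen:
            seen.add(v)
            tree.add(frozenset((u, v)))
            q.append(v)

blocked = [l for l in links if frozenset(l) not in tree]
print("forwarding links:", len(tree), "| blocked links:", len(blocked))
```

With 4 switches and 5 links, only 3 links forward and 2 are blocked: 40% of the installed link capacity is idle, which is exactly the waste TRILL's multipath forwarding eliminates.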

(5) FCoE technology

A traditional data center often has both a data network and a storage network, and the convergence of the two in new-generation data center networks is increasingly evident. FCoE technology makes this convergence possible: FCoE encapsulates storage-network data frames inside Ethernet frames for forwarding. This convergence must be implemented in the data center switch; a general switch usually does not support FCoE.
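The encapsulation idea can be sketched in a few lines. This is a deliberately simplified illustration: real FCoE (per the FC-BB-5 standard) also adds version bits, reserved fields, SOF/EOF delimiters, and padding, which are omitted here; only the EtherType 0x8906 that marks a frame as FCoE is shown.

```python
import struct

# Simplified sketch of FCoE encapsulation: a Fibre Channel frame is
# carried as the payload of an Ethernet frame whose EtherType is 0x8906.
FCOE_ETHERTYPE = 0x8906

def encapsulate(dst_mac: bytes, src_mac: bytes, fc_frame: bytes) -> bytes:
    """Wrap an FC frame in a minimal Ethernet header (illustration only)."""
    eth_header = dst_mac + src_mac + struct.pack("!H", FCOE_ETHERTYPE)
    # Real FCoE inserts an FCoE header and SOF delimiter before fc_frame.
    return eth_header + fc_frame

frame = encapsulate(b"\x0e" * 6, b"\x02" * 6, b"FC-FRAME-BYTES")
assert frame[12:14] == b"\x89\x06"   # EtherType identifies the payload as FCoE
```

The switch must parse this EtherType and apply storage-class (lossless) forwarding to such frames, which is why FCoE support is a data-center-switch feature rather than a generic one.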

Fourth, PoE switch technology and advantages

There are two mainstream PoE standards on the market, IEEE 802.3af and IEEE 802.3at, which define supplied power of 15.4 W and 30 W respectively. Because of loss along the cable, the power actually available to the device is 12.95 W and 25.5 W. Both supply nominal DC 48 V.

When using a PoE switch that supports the IEEE 802.3af standard, the power drawn by the powered device cannot exceed 12.95 W; likewise, with an IEEE 802.3at PoE switch it cannot exceed 25.5 W.

Generally, a PoE switch that supports both IEEE 802.3af and 802.3at adapts its supply to the device: connected to a 5 W device, it provides 5 W; connected to a 20 W device, it provides 20 W.
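The power budgets above can be captured in a small lookup, useful when checking whether a device fits a switch's PoE standard (a sketch using only the figures quoted in this section):

```python
# PoE power budgets from the figures above: power injected at the switch
# port (PSE) vs. power guaranteed at the powered device (PD) after cable loss.
POE_STANDARDS = {
    "802.3af": {"pse_watts": 15.4, "pd_watts": 12.95},
    "802.3at": {"pse_watts": 30.0, "pd_watts": 25.5},
}

def fits_budget(device_watts: float, standard: str) -> bool:
    """True if a powered device's draw fits within the standard's PD budget."""
    return device_watts <= POE_STANDARDS[standard]["pd_watts"]

print(fits_budget(12.0, "802.3af"))   # True: a 12 W AP fits 802.3af
print(fits_budget(20.0, "802.3af"))   # False: a 20 W device needs 802.3at
print(fits_budget(20.0, "802.3at"))   # True
```

Note the budget check uses the device-side figure (12.95 W / 25.5 W), not the port-side figure, since cable loss is already accounted for there.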

A PoE switch is a switch that can supply power over the network cable. Compared with a normal switch, terminals such as APs and IP cameras need no separate power wiring, which makes the whole network more reliable. Besides the forwarding functions of an ordinary switch, a PoE switch supplies power to the device at the other end of the cable.

A PoE-powered device needs only one network cable, which saves space, allows the device to be relocated easily, and reduces cost.

As long as the PoE switch is connected to a UPS, it can keep powering all downstream PoE devices during an outage. Legacy devices and PoE devices can safely coexist on the same network and over the existing Ethernet cabling.
