Small enterprise network setup

WTD drops packets from a queue based on the DSCP value and the associated threshold. If the threshold is exceeded for a given internal DSCP value, the switch drops the packet. Each queue has three threshold values, and the internal DSCP determines which of the three thresholds is applied to the frame. Two of the three thresholds are configurable (explicit) and one is not (implicit).

This last threshold corresponds to the tail of the queue, expressed as a percentage of the queue limit. The figure depicts how different class-of-service applications are mapped to the 1P1Q3T ingress queue structure and how each queue is assigned a different WTD threshold. The DSCP-marked packets in the policy-map must be assigned to the appropriate queue, and each queue must be configured with the recommended WTD threshold as defined in the figure. The following ingress queue configuration must be enabled in global configuration mode on every access-layer switch.
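
A minimal sketch of such an ingress queue configuration, assuming a platform that uses the mls qos srr-queue input command set; the queue weights, thresholds, and DSCP mappings below are illustrative placeholders rather than the values from the figure:

mls qos srr-queue input priority-queue 2 bandwidth 30
mls qos srr-queue input bandwidth 70 30
! Two explicit WTD thresholds per queue; the third (implicit) threshold is the queue tail.
mls qos srr-queue input threshold 1 80 90
! Map DSCP values to queue 1 or queue 2 and to one of the three thresholds.
mls qos srr-queue input dscp-map queue 1 threshold 1 0 8 10 12 14
mls qos srr-queue input dscp-map queue 1 threshold 2 16 18 20 22
mls qos srr-queue input dscp-map queue 1 threshold 3 24 26 48 56
mls qos srr-queue input dscp-map queue 2 threshold 3 32 40 46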

The QoS implementation for egress traffic toward the network edge on access-layer switches is much simpler than the ingress traffic QoS. The egress QoS implementation provides optimal queueing policies for each class and sets the drop thresholds to prevent network congestion and application performance impact.

Cisco Catalyst switches support four hardware egress queues, each of which is assigned its own queueing policy. As a best practice, each physical or logical link should divide its bandwidth assignment to map to these hardware queues. The figure shows the best-practice egress queue bandwidth allocation for each class. Given these minimum queuing requirements and bandwidth allocation recommendations, the application classes can be mapped to their respective queues.

If configurable drop thresholds (congestion avoidance mechanisms such as WTD) are supported on the platform, they may be enabled to provide intra-queue QoS to these application classes, in the order listed, such that control-plane protocols receive the highest level of QoS within a given queue.

If configurable drop thresholds are supported on the platform, they may also be enabled to provide intra-queue QoS such that scavenger traffic is dropped ahead of bulk data. Egress queueing is designed to map traffic, based on DSCP value, to four egress queues.

DSCP-marked packets are assigned to the appropriate queue, and each queue is configured with the appropriate WTD threshold as defined in the figure. Egress queueing is the same on network edge ports and on uplinks connected to the internal network, and it is independent of trust mode.

The following egress queue configuration must be enabled, in global configuration mode, on every access-layer switch in the network. The tables summarize the ingress and egress QoS policies at the access layer for several types of validated endpoints.
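
A minimal sketch of the egress queue configuration referenced above, assuming the mls qos queue-set and srr-queue output command set; buffer, threshold, and DSCP-map values are illustrative placeholders:

! Buffer allocation and WTD thresholds for the four egress queues (queue-set 1).
mls qos queue-set output 1 buffers 15 30 35 20
mls qos queue-set output 1 threshold 2 80 90 100 400
! Map DSCP values to the four egress queues.
mls qos srr-queue output dscp-map queue 1 threshold 3 32 40 46
mls qos srr-queue output dscp-map queue 2 threshold 3 24 26 48 56
mls qos srr-queue output dscp-map queue 3 threshold 3 0
mls qos srr-queue output dscp-map queue 4 threshold 1 8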

All connections between internal network devices that are deployed within the network domain boundary are classified as trusted and follow the same QoS best practices recommended in the previous section. Ingress and egress core QoS policies are simpler than those applied at the network edge; see the figure. The core network devices are considered trusted and rely on the access switch to properly mark DSCP values.

The core network is deployed to ensure consistent, differentiated QoS service across the network. This ensures there is no service quality degradation for high-priority traffic, such as IP telephony or video. The QoS implementation at the main and remote large sites differs from the remote small site because of the different platforms used as the collapsed core router (Catalyst E versus Catalyst X StackWise Plus).

On the Catalyst E, no ingress QoS configuration is required, since QoS is enabled by default and all ports are considered trusted. On the Catalyst X, by contrast, all interfaces are in untrusted mode by default after QoS is globally enabled. QoS trust settings must be set on each Layer 2 or Layer 3 port that is physically connected to another device within the network trust boundary. When a Cisco Catalyst switch is deployed in EtherChannel mode, the QoS trust settings must be applied to every physical member link and to the logical port-channel interface.

Best practice is to enable trust-DSCP settings on each physical and logical interface that connects to another internal trusted device (e.g., an access-layer switch within the trust boundary). Additional ingress QoS techniques such as classification, marking, and policing are not required at the collapsed core layer, since these functions are already performed by the access-layer switches. The Catalyst E architecture, with either the classic or the next-generation supervisor, does not need ingress queueing, since all forwarding decisions are made centrally on the supervisor.
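
A rough sketch of the trust configuration described above, assuming a platform that uses the mls qos trust command; the interface numbers are hypothetical, and on platforms that support it the same trust setting is also applied to the logical port-channel interface:

interface range TenGigabitEthernet1/0/1 - 2
 description EtherChannel member links to internal trusted device
 mls qos trust dscp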

No additional QoS configuration is required at the collapsed core-layer system. The QoS implementation remains the same whether the switch is deployed as an X StackWise stack or as a standalone switch. By default, QoS is disabled on the X switch. Following is a sample configuration to enable QoS in global configuration mode:
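
(The sketch below is illustrative and assumes the mls qos command set.)

! Globally enable QoS; until this is entered, the switch forwards traffic with QoS disabled.
mls qos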

The ingress queuing configuration is consistent with the implementation at the access edge. Following is a sample configuration for the ingress queues of the Catalyst X StackWise collapsed core switch:
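
(The sketch below is illustrative; it simply mirrors the access-edge ingress queue commands, with placeholder weights.)

! Ingress priority queue and SRR weights mirror the access-edge settings.
mls qos srr-queue input priority-queue 2 bandwidth 30
mls qos srr-queue input bandwidth 70 30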

Egress QoS from the collapsed core router provides optimized queueing and drop thresholds to drop excess low-priority traffic and protect high-priority traffic.

The Supervisor-6E supports up to eight traffic classes for QoS mapping. DBL (Dynamic Buffer Limiting) tracks the queue length for each traffic flow in the switch.

The egress QoS implementation bundles the queueing and policing functions on EtherChannel-based networks. To provide low latency for high-priority traffic, all lower-priority traffic must wait until the priority queue is empty. Best practice is to implement a policer along with the priority queue to provide fairer treatment for all traffic.

The following sample configuration shows how to create an 8-class egress queueing model and prevent high-priority traffic from consuming more bandwidth than global policies allow. The egress QoS service-policy must be applied to all the physical EtherChannel member links connected to the different service blocks.
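
A minimal sketch of such an egress queueing policy, assuming MQC-style class-maps on the Sup-6E; the class names, DSCP groupings, bandwidth percentages, and interface numbers are illustrative, and only three of the eight classes are shown:

class-map match-any PRIORITY-QUEUE
 match dscp ef cs5 cs4
class-map match-any CONTROL-MGMT-QUEUE
 match dscp cs7 cs6 cs3 cs2
class-map match-any BULK-DATA-QUEUE
 match dscp af11 af12 af13
!
policy-map EGRESS-QUEUE
 class PRIORITY-QUEUE
  priority
 class CONTROL-MGMT-QUEUE
  bandwidth remaining percent 10
  dbl
 class BULK-DATA-QUEUE
  bandwidth remaining percent 4
  dbl
 class class-default
  bandwidth remaining percent 25
  dbl
!
! Apply to each physical EtherChannel member link (not the port-channel).
interface range TenGigabitEthernet1/1 - 2
 service-policy output EGRESS-QUEUE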

EtherChannel is an aggregated logical bundle interface that does not perform queueing and relies on individual member-links to queue egress traffic.

The policer that rate-limits priority-class traffic must be implemented on the EtherChannel, not on the individual member links, since it governs the aggregate egress traffic limit. The following additional policy-map must be created to classify priority-queue class traffic and rate-limit it to 30 percent of egress link capacity (see the sketch below). If the remote large site network is deployed with the Sup-6E or Sup-6LE, the configuration is the same as described in the previous section.
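
A minimal sketch of the policer described above; the class-map from the previous sketch, the policy-map name, and the port-channel number are assumptions:

policy-map PQ-POLICER
 class PRIORITY-QUEUE
  ! Rate-limit aggregate priority traffic to 30 percent of egress capacity.
  police cir percent 30 conform-action transmit exceed-action drop
!
interface Port-channel1
 service-policy output PQ-POLICER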

The QoS deployment and implementation guidelines differ when the Cisco Catalyst E is deployed with the classic Supervisor-V module. Before forwarding egress traffic, each packet must be internally classified and placed in the appropriate egress-queue.

Placing traffic into different class-of-service queues offers traffic prioritization and guaranteed bandwidth to the network. The Catalyst X can have up to four egress queues. Before forwarding egress traffic, each packet is placed in the appropriate egress queue as shown in the figure. When a queue is shaped, traffic exceeding the shape parameter is dropped, and the queue cannot take advantage of excess bandwidth capacity when other queues are not using their bandwidth allocations. The following sample configuration shows how to implement egress QoS on the Catalyst X:
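
(The sketch below is illustrative; the interface range is a placeholder and the SRR share weights do not come from the figure.)

interface range GigabitEthernet1/0/1 - 48
 queue-set 1
 ! Enable the egress priority queue and share remaining bandwidth across the other queues.
 priority-queue out
 srr-queue bandwidth share 1 30 35 5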

The Small Enterprise Design Profile is a high-performance, resilient, and scalable network design. A network outage may be caused by a system fault, human error, or a natural disaster. The small enterprise network is designed to minimize the impact of a failure regardless of the cause.

Network outages may be either planned or unplanned. Such outages may be caused by internal faults in the network or in devices due to hardware or software malfunctions. The network is designed to recover from most unplanned outages in less than a second. In many situations, the user will not even notice that the outage occurred. If the outage lasts longer (several seconds), the user will notice the lack of application responsiveness.

The network is designed to minimize the overall impact of an unplanned network outage and to gracefully adjust to and recover from many outage conditions. The figure shows an example of a real-time VoIP application and the user impact depending on the duration of the outage event. Several techniques are used to make the network design more resilient. Deploying redundant devices and redundant connections between devices enables the network to recover from fault conditions.

Identifying critical versus non-critical applications and network resources optimizes cost and performance by focusing on the most important elements of the network design. The resiliency of a system design is often categorized into network resiliency, device resiliency, and operational resiliency. The high availability framework is based upon these three resiliency categories.

The figure shows which technologies are implemented to achieve each category of resiliency.

Selective deployment of redundant hardware is an important element of the Small Enterprise Design Profile that delivers device resiliency. The redundant hardware components used for device resiliency vary between fixed configuration and modular Cisco Catalyst switches. To protect against common network faults or resets, all critical main and remote site campus network devices must be deployed with a similar device resiliency configuration.

This subsection provides basic redundant hardware deployment guidelines for the access-layer and collapsed core switching platforms in the campus network. Redundant power supplies protect the device from a power outage or power supply failure.

Protecting power is important not only for the network device but also for the endpoints that rely on power delivery over the Ethernet network. Redundant power supplies are deployed differently depending on the switch type. A single Cisco RPS unit has modular power supplies and fans to deliver power to multiple switches, and deploying internal and external power supplies provides a redundant power solution for fixed configuration switches.

Redundant network connections protect the system from failure due to cable or transceiver faults. Redundant network connections attached to a single fixed configuration switch or network module in the Cisco Catalyst switch do not protect against internal device hardware or software fault.

Best-practice design is to deploy redundant network modules within the Catalyst switch, and the Cisco X StackWise Plus solution in the small remote site collapsed core network. Deploying the X StackWise Plus for critical access-layer switches in the server farm network and in the main site is also a best practice. Connecting redundant paths to different hardware elements provides both network and device resiliency.

The processing software operation differs between standalone or StackWise fixed configuration switches and the supervisor module of a modular switch. Network communication and forwarding operations can be disrupted when the processing unit fails, causing a network outage. Network recovery techniques vary by platform. Standalone and non-stackable fixed configuration switches, like the Cisco Catalyst or E, feature power redundancy and network resiliency support; however, they do not protect against a processing unit failure.

During a processing unit failure event, all endpoints attached to the switch are impacted and network recovery time is non-deterministic. Cisco Catalyst X switches can be deployed in StackWise mode using a special stack cable. Up to nine switches can be integrated into a single stack that delivers a distributed forwarding architecture and a unified, single control and management plane.

Device level redundancy in StackWise mode is achieved via stacking multiple switches using the Cisco StackWise technology. One switch from the stack is selected automatically to serve as the master, which manages the centralized control-plane process.

The Cisco StackWise solution provides 1:N redundancy. In the event of an active master-switch outage, a new master is selected automatically. Since Cisco StackWise enables up to nine switches to appear as one logical switch, management and control functions are centralized. Most Layer 2 and Layer 3 functions are performed centrally; however, Layer-2 topology development is distributed across the stack members.

The table lists network protocol functions and identifies which are centralized and which are distributed. The Cisco StackWise Plus solution offers network and device resiliency with distributed forwarding.

In the event of a master switch outage, Non-Stop Forwarding (NSF) enables packet forwarding to continue based on current state information while a new master switch is selected. New master switch selection is accomplished in a matter of milliseconds; the amount of time needed to reestablish the control plane and develop distributed forwarding varies with the size and complexity of the network.

Following is a best practice to reduce Layer-3 disruption in the event of a master switch outage: designate the intended master switch by giving it the highest switch priority, and isolate the uplink Layer-3 EtherChannel bundle path by using physical ports from member switches rather than the master switch. With NSF capabilities enabled, this design decreases network downtime during a master-switch outage.
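
A minimal sketch of the priority configuration, assuming switch 1 is the intended master (the change takes effect at the next master election):

! Assumption: switch 1 is the intended stack master; 15 is the highest priority value.
switch 1 priority 15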

An understanding of Stateful Switchover (SSO) and StackWise components, and of the failover events associated with NSF, provides significant insight into designing a network that takes advantage of supervisor redundancy. The following subsection uses these concepts and principles to identify the design parameters and applies them to develop a best-practice hierarchical network with the highest availability.

The next-generation Catalyst S Series Layer 2 access switch introduces high-speed, low-latency stacking capability based on a "pay as you grow" model. Following the success of the Catalyst X StackWise Plus, the Catalyst S model offers high availability and increased port density with a unified, single control plane and management to reduce cost for the small enterprise network. Cisco FlexStack comprises a hardware module and software capabilities. The FlexStack module must be installed in each Catalyst S switch that is intended to be deployed in a stack group.

The Cisco FlexStack module is hot-swappable, providing the flexibility to deploy FlexStack without impacting business network operation. Cisco FlexStack allows up to four Catalyst S Series switches in a single stacking group; it is recommended to deploy each switch member with dual FlexStack cables to provide increased 20-Gbps bidirectional stack bandwidth capacity and FlexStack redundancy.

The FlexStack protocol dynamically detects a switch member and allows it to join the stack group if all stacking criteria are met. Data forwarding and FlexStack QoS operate on a per-hop basis; unknown unicast, broadcast, and multicast traffic is flooded between the stack group switch members.

The FlexStack protocol detects and breaks loops between the FlexStack group switches. Once the destination switch member is determined, the Catalyst S uses the shortest egress stack-port path to forward traffic.

Any packet that traverses the FlexStack is encapsulated with a 32-byte FlexStack header carrying the information needed to support the centralized control-plane and distributed forwarding design. When deployed along with NSF, the Catalyst E provides an enterprise-class, highly available system with network and device resiliency. SSO is a Cisco IOS service used to synchronize critical forwarding and protocol state information between redundant supervisors configured in a single chassis.

With SSO enabled, one supervisor in the system assumes the active role and the other supervisor becomes the hot standby. Each is ready to back up the other, thus providing hot redundancy to protect from a control-plane outage. Because the physical ports on both supervisors remain operational, the system benefits from using ports on both supervisors during normal operation. NSF enables packets to continue to be forwarded using the existing routing table information during a switchover. NSF also provides graceful restart to the routing protocol, such that during the failover the routing protocol remains aware of the change and does not react by resetting its adjacencies.

If the routing protocol were to react to the failure event and alter routing path information, the effectiveness of stateful switchover would be diminished.
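
A minimal sketch of enabling SSO with NSF-aware routing, assuming a dual-supervisor chassis running EIGRP; the autonomous-system number is a placeholder:

! Stateful Switchover between the redundant supervisors.
redundancy
 mode sso
!
! Graceful restart so neighbors keep forwarding during a supervisor switchover.
router eigrp 100
 nsf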

Designing the network to recover from unplanned outages is important. It is also important to consider how to minimize the disruption caused by planned outages. These planned outages can be due to standard operational processes, configuration changes, software and hardware upgrades, etc.

The same redundant components that mitigate the impact of unplanned outages can also be used to minimize the disruption caused by planned outages. The ability to upgrade individual devices without taking them out of service is enabled by internal component redundancy, such as redundant power supplies and supervisors, complemented by system software capabilities.

Two primary mechanisms exist to upgrade software in a live network. Validating operational resiliency is beyond the scope of this design guide; refer to CCO documentation for deployment guidelines. Many of the design features of the Small Enterprise Design Profile described in the "Deploying Network Foundation Services" section contribute to the network's high availability capabilities. This section focuses on how to implement additional features that complete the high availability design of the small enterprise network.

EtherChannel and UDLD are two design features included in the network foundation services that contribute to network resiliency. Poor signaling or a loose connection may cause a continuous port flap, where the port alternates between the active and inactive states. A single flapping interface can impact the stability and availability of the network.

Route summarization is one technique that mitigates the impact of a flapping port. Summarization isolates the fault domain with a new metric announcement by the aggregator and thereby hides the local network's fault within the domain. A best practice to mitigate local network domain instability due to port flaps is to implement IP Event Dampening on all Layer-3 interfaces.

Each time a Layer-3 interface flaps, IP event dampening tracks and records the flap event. Upon multiple flaps, a penalty is assigned to the port and link-status notifications to IP routing are suppressed until the port becomes stable.

IP event dampening is a local function and does not have a signaling mechanism to communicate with a remote system. It can be implemented on each individual physical or logical Layer-3 interface: physical ports, SVIs, or port-channels. Following is an example configuration to implement IP Event Dampening:
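
(The sketch below is illustrative; the interface names are placeholders and the dampening command is shown with its default timers.)

interface TenGigabitEthernet1/1
 dampening
!
interface Vlan100
 dampening
!
interface Port-channel1
 dampening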

The output of the show interface dampening command illustrates how IP event dampening keeps track of port flaps and decides whether to notify the IP routing process, based on the interface suppression status. In a multilayer access-distribution design, the Layer-2 and Layer-3 demarcation is at the collapsed core-distribution device. IP event dampening becomes more effective when each access-layer switch is deployed with a unique set of Layer-2 VLANs.

Assigning unique VLANs to each access-layer switch also helps IP event dampening isolate the problem and prevents network faults from being triggered across the multilayer network. IP event dampening likewise keeps track of the individual logical VLAN interfaces associated with the same Layer-2 physical trunk ports: when a Layer-2 trunk port flaps, the state of the SVI also flaps, which forces dampening to track and penalize the unstable interfaces. As described earlier, redundant hardware is an important technique for achieving device resiliency.

Redundant power supplies can prevent a system outage due to a power outage or a power supply or fan hardware failure. The Cisco Catalyst E provides power to internal hardware components and to external devices such as IP phones.

All of this power is provided by the internal power supplies. Dual power supplies in the Catalyst E can operate in one of two modes: redundant or combined. The system determines the power capacity and the number of power supplies required based on the power needed for all internal and external powered components. The following global configuration enables the power supplies to operate in combined mode:
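
(The sketch below is illustrative and assumes a modular chassis that supports the power redundancy-mode command.)

! Use both power supplies in combined mode rather than reserving one as a backup.
power redundancy-mode combined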

The next-generation Catalyst X Series platform introduces the innovative Cisco StackPower technology to provide a power redundancy solution for fixed configuration switches. Cisco StackPower unifies the individual power supplies installed in the switches and creates a pool of power, directing that power where it is needed. Up to four switches can be configured in a StackPower stack with the special Cisco proprietary StackPower cable.

During an individual power supply fault, the affected switch can draw power from the global power pool to provide seamless operation in the network. With the modular power supply design of the Catalyst X Series platform, the defective power supply can be swapped without disrupting network operation. Cisco StackPower can be deployed in two modes: power-sharing mode and redundant mode. In power-sharing mode, the total aggregated available power of all switches in the power stack (up to four) is treated as a single large power supply.

All switches in the stack share power, with the available power delivered to all powered devices connected to PoE ports. In this mode, the total available power is used for power budgeting decisions, and no power is reserved to accommodate power-supply failures.

If a power supply fails, powered devices and switches could be shut down; this power-sharing behavior is the default mode. In redundant mode, although there is less available power in the pool for switches and powered devices to draw from, the possibility of having to shut down switches or powered devices in the case of a power failure or extreme power load is reduced. It is recommended to budget the required power and deploy each Catalyst X switch in the stack with dual power supplies to meet that need. Enabling redundant mode offers power redundancy as a backup during a power supply unit failure event.

Since Cisco StackWise Plus can group up to nine X Series switches in the stack ring, and a single power stack group accommodates up to four switches, Cisco StackPower must be deployed with two power stack groups. The following sample configuration demonstrates deploying Cisco StackPower redundancy mode and grouping the stack members into a power stack group. To make the new power configuration effective, the network administrator must plan downtime, because all the switches in the stack ring must be reloaded:
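
(The sketch below is illustrative; the power-stack name and switch numbers are placeholders, and the stack must be reloaded before the new power configuration takes effect.)

! Define a power stack in redundant mode and assign stack members to it.
stack-power stack PowerStack-1
 mode redundant
!
stack-power switch 1
 stack PowerStack-1
stack-power switch 2
 stack PowerStack-1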

Up to two devices can be protected by the Cisco RPS against a power or power supply failure. Additional power resiliency can be added to the Cisco RPS by deploying dual power supplies to back up two devices simultaneously. Note that external power redundancy requires a special RPS cable, and specific models currently do not support external power redundancy. Deploying external power redundancy on the Cisco Catalyst and S Series with the Cisco RPS occurs automatically and does not require any extra configuration.

The collapsed core device in the main and remote sites (Catalyst E or Catalyst X StackWise Plus) is deployed with a redundant supervisor or with StackWise Plus to enable graceful recovery from a switch hardware outage.

Any access switch that is deemed critical may be deployed with StackWise Plus or FlexStack to improve device resiliency. The implementation for each switch is different and is discussed separately in the sections that follow.

Cisco X StackWise Plus is deployed for the collapsed core in the small remote site network. Cisco IOS automatically adjusts the interface addressing and its associated configuration based on the number of provisioned switches in the stack.

The centralized control plane and management plane are managed by the master switch in the stack. By default, master switch selection within the ring is performed dynamically by negotiating several parameters and capabilities between the switches in the stack. Each StackWise-capable switch is configured with priority 1 by default, so the election is not deterministic, and all centralized Layer-3 functions must be reestablished on a neighbor switch during a master-switch outage.

To minimize the control-plane impact and improve network convergence, the Layer-3 uplinks should be diverse, originating from member switches instead of the master switch.

Below is an example of a small office network with one server. Along with the server, you have a firewall and a router. As you can see, the firewall is the first line of defense, the icon next to the Internet. You can use another firewall after the WiFi access point to protect all laptops, desktops, or smartphones that access the office network wirelessly.

This is just a simple example of LAN (local area network) infrastructure. You can always customize it and make it suitable for the devices in your company. Some of the networked computers can easily be replaced with a printer, fax machine, or IP phone. In this office network diagram, we did not add a modem, since your Internet service provider (ISP) will provide and set up the modem for you.

The network topology is crucial for a reliable and redundant network. There are other things to take into consideration: IP addressing and subnet management, choosing internal domain names, data and network separation for better performance, licensing requirements, etc. All manufacturers ship routers with generic or default access credentials; change them to avoid intrusion into your network. When finishing the WiFi network setup, create a password to prevent unauthorized users from connecting to the business network.

If you are running a restaurant or similar business, you can set up a dedicated WiFi network for guests and visitors only. Sharing resources was the main reason to set up the office network in the first place. This guide is for Windows 10 users, but a very similar procedure applies to other Windows versions; for other versions, visit the Microsoft support portal. Turn on Sharing Options — when you connect to the network for the first time, set the sharing options manually for network discovery, network sharing, file sharing, and printer sharing.

Network Discovery — Network discovery allows other users on the network to see your computer and vice versa. You can turn off network visibility or set custom visibility. File and Printer Sharing — Follow the same procedure for file and printer sharing; at the end, click Turn on file and printer sharing. To share a file or folder, press the right mouse button, click Share With, and then select the people that you want to share the file or folder with. Important: Use the file and printer sharing options only when you are certain that the other computers are virus-free and you can trust them.

Setup and Manage Workgroups — When you set up a small office network, Windows will automatically create a workgroup and give it a name. A workgroup is a group of computers within the same network that are connected in order to share resources, such as files or printers. You can reach our desktop support team for help with this setup.

Physical servers are the core of your network. Servers can host your line-of-business applications such as QuickBooks or Sage.

Network Servers also support user authentication, DNS, and other network functions necessary for computer communications. Your office network server can also be your centralized storage choice if you are not using cloud storage for your files. Your network server needs to be the right size to support your network devices and computers properly. This is one component of your network that you might need help with.

To know exactly what type of network server you will need, please contact our network team. There are many variables we need to consider to give you proper server size specifications. To size a server, things like applications, number of users, storage requirements, and security requirements have to be considered. You can call our network team, and we can help you size your server for free.

Protecting your business from cyber threats is paramount.

Today more than ever, businesses have to take proper steps to protect their data and files from cyber thieves. Your business will also need Microsoft Office tools to be able to process documents, spreadsheets, and email. Endpoint protection software is the shield and protection for your computers.

Your firewall will protect you from outside threats trying to make it to your computers, but if a threat does make it to a computer, your endpoint protection will stop it. Many businesses fall victim to cyber-attacks, and the trend is growing.

Businesses are also failing to recognize the importance of encryption software in an office environment. Simply put, encryption software can protect your business even when firewalls, intrusion prevention, and even antivirus software fail. If your files are stolen, they will be useless to the crook, as the encryption will not allow the files to be read, and your data, although stolen, will still be protected.

Building and configuring a small business network can be a challenging task.

Ideally, businesses should install two standalone units in the room, on separate circuit breakers, for redundancy. This also allows them to be alternated regularly for servicing. Proper cable management (see the next section) also helps ensure proper ventilation. Not only does this create cabling constraints, but older server chassis may need 1U to 2U of space between each other to ensure adequate airflow.

Setting up a server rack is more than just twisting a few screws to secure the equipment into place. Intra-cabinet wiring aside, it makes sense to terminate cable runs for Ethernet LAN points for desktop computers, IP cameras and other network appliances at the rack.

The best way to properly manage all these cables is to use an RJ45 patch panel to terminate Ethernet cable runs. The typical patch panel installs in 1U of space and offers up to 24 ports. Using a patch panel does require some hands-on work — stripping a cable, punching it into the patch panel and using a wire tester tool to verify the connectivity. If hiring a professional is in the budget, he or she can probably get everything installed in less than a day.


