Author Archive


What is Network Uptime? | July 10th, 2020

Your business network is an invaluable part of your organization. Whether you’re inputting sensitive patient data, managing construction projects or completing administrative tasks, network uptime supports smooth operations. Though uptime is vital, it isn’t always guaranteed. Let’s take a deeper look at uptime and ways you can optimize it for your business.

What Does Network Uptime Mean?

Network uptime refers to the time when a network is up and running. In contrast, downtime is the time when a network is unavailable. A network’s uptime is typically measured by calculating the ratio of uptime to total time within a year, then expressing that ratio as a percentage.

The concept of “five-nines” — a network availability of 99.999% — has been an industry gold standard for many years. This uptime percentage translates to about 5.26 minutes of unplanned downtime a year. Though five-nines or 100% uptime rates may be difficult to achieve, getting as close as possible is a worthwhile pursuit. Your business likely feels the impact of even a fraction of uptime difference. Making sure your service provider meets your requirements can help you minimize the costs of unplanned downtime.

Service Level Agreements and Uptime

Service level agreements (SLAs) promise a set of performance standards between a service provider and their client. In an SLA, a provider may:

  • Identify customer needs
  • Provide a foundation for client comprehension
  • Address potential conflicts
  • Create a space for dialogue
  • Discuss practical expectations

SLAs can help you determine whether a service provider meets your company’s needs and wants. The central components of an SLA are uptime, packet delivery and latency. While successful packet delivery and low latency are important, uptime is an especially crucial component to consider. High network availability helps maximize profitability for your business.

The Costs of Downtime

Network failure is a huge inconvenience, but even the best systems confront unforeseen issues. A power outage, for example, could cause hardware failures and threaten network reliability. In this scenario, you could increase your network uptime with a backup power supply. But if you haven’t planned for the situation, you might face extra difficulties taking reactive measures and returning your network to normal function.

Network downtime can cost a business thousands of dollars each minute, which makes 24/7 network monitoring essential for many industries. With Worldwide Services, you can protect your business from downtime with our recurring network and IT maintenance. Worldwide Services can help you prevent unnecessary network outages and prepare for when they occur. We’re experienced with a variety of different industries and are equipped to make uptime minutes count.

How to Determine Server Uptime

You can calculate your network uptime with some simple math:

  • 24 hours per day x 365 days per year = 8,760 hours per year
  • Number of hours your network is up and running per year ÷ 8,760 hours per year x 100 = Yearly uptime percentage

For example, if your network is down for one hour total during an entire year, this is how you calculate your network uptime:

  • 8,759 hours ÷ 8,760 hours ≈ 0.99989
  • 0.99989 x 100 ≈ 99.989%
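
The arithmetic above can be wrapped in a small helper. A minimal Python sketch for illustration:

```python
HOURS_PER_YEAR = 24 * 365  # 8,760 hours

def uptime_percentage(downtime_hours):
    """Yearly uptime percentage given total downtime in hours."""
    uptime_hours = HOURS_PER_YEAR - downtime_hours
    return uptime_hours / HOURS_PER_YEAR * 100

def downtime_minutes(availability_pct):
    """Minutes of downtime per year implied by an availability percentage."""
    return HOURS_PER_YEAR * 60 * (1 - availability_pct / 100)
```

For one hour of downtime, `uptime_percentage(1)` gives roughly 99.989%, and `downtime_minutes(99.999)` confirms the roughly 5.26 minutes of "five-nines" downtime mentioned earlier.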

You can also use free or paid website monitoring services to check your server uptime. A website monitoring service tracks and tests your servers and may send an alert if something goes wrong. Besides checking network uptime, comprehensive monitoring services offer feature-heavy programs to keep your business operational during network disturbances.
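
The core loop of such a monitoring service is simple to sketch. The snippet below is a hypothetical example, not any vendor's product: it probes a URL at an interval and reports the observed uptime percentage. The `probe` parameter is injectable so the loop can be exercised without a live network.

```python
import time
import urllib.request

def check_once(url, timeout=5):
    """Return True if the URL answers with a non-error HTTP status."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as response:
            return 200 <= response.status < 400
    except OSError:
        return False

def monitor(url, checks=3, interval=60, probe=check_once, alert=print):
    """Probe `url` `checks` times; alert on each failure, return observed uptime %."""
    successes = 0
    for _ in range(checks):
        if probe(url):
            successes += 1
        else:
            alert(f"ALERT: {url} appears to be down")
        time.sleep(interval)
    return successes / checks * 100
```

A real service would add retry logic, alert channels such as email or SMS, and persistent history, but the probe-alert-report cycle is the same.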

Worldwide Services can support your network by resolving hardware failure, managing your network performance and providing 24/7 network monitoring through our network operations center (NOC) services. With our services, you can significantly decrease network downtime and maintain optimal customer satisfaction. Since our IT management team handles your network issues, your staff can be more productive at what they do best. You can also stay up-to-date about what’s going on with your network with our real-time tracking services.

How to Improve Network Uptime

Your business can learn how to increase network uptime by analyzing the structure of your network architecture. Network architecture is typically composed of four main parts — the core network, interconnection networks, access networks and customer applications. The core network is the network component from which we expect optimal performance, or five-nines. The core network is also essential to the other parts of the network as its functions support customers who are interconnected with the access network.

From the access network, clients can open customer applications. But if there is a problem with the access network, such as the local area network (LAN), clients may receive less than optimal results. The LAN may be negatively affected by the infrastructure of the provider’s network terminating unit (NTU) that connects the customer’s equipment on location with the network.

Decreasing downtime starts with identifying potential points of failure like these and addressing them before they cause issues. Below are our top network uptime best practices and ways to improve network uptime for your business.

1. IT Mapping

When you assess the core components of your network architecture, you can create an IT map detailing network device availability and network health. The map should show all your IT assets and services, including hardware and software inventory as well as relevant locations and vendors. When completed, you can use the IT map to:

  • Note how network components are connected with one another.
  • Consider how one failure might affect another device or functionality in the overall IT system.
  • Identify what components are most essential.
  • Note unnecessary redundancies and potential issues with physical resources.
  • Look for vulnerabilities and re-organize accordingly.

In addition to hardware, it’s a good idea to get a headcount of all the other IT resources that are critical to the system. This may include:

  • Human resources
  • Budget
  • Executive officials
  • End users

Map these resources with regard to their qualitative and quantitative effects. Operational budgets, for example, could be mapped to their role in recovering your IT system.

2. Hardware Warranties

The migration from physical systems to cloud services has eased the burden many businesses once carried: the risk of losing vital on-site infrastructure. Though cloud services are on the rise, many businesses still rely on smaller devices like projectors or tablets for essential functions. While out-of-pocket repairs are an option, relying on a warranty for hardware repairs is usually the better choice.

If a piece of hardware is still under warranty, you shouldn’t have to pay for repairs or replacement, which helps you minimize the total costs of system downtime. It can be helpful to keep track of how long a warranty lasts for a piece of hardware, what’s covered under the warranty and which pieces of hardware are reaching the end of their warranty. If a piece of hardware is nearing the end of its warranty, compare the costs of repairing it and replacing it with upgraded hardware.

3. Software Management

It’s also helpful to keep track of your software, whether you have Software-as-a-Service (SaaS) subscriptions or local programs. A system performance management (SPM) provider can help you manage your software inventory, including titles, upgrades and deployment. The most useful SPM programs have holistic functionality that also lets you monitor overall network health by collecting and analyzing other operational metrics.

With a solutions-focused SPM provider, you’ll only need one platform to manage your network performance. Effective SPM programs should be able to manage and solve your network issues, all while keeping you in the loop with automatic updates.

4. Faster Connections

Faster Ethernet connections can help prevent outages due to traffic overload. Many businesses connect their servers to the internet with Ethernet connections that run at 10 gigabits per second. To support uptime, consider switching to a faster Ethernet speed like 40 gigabits per second. Depending on your network, you may experience dramatic spikes in usage that can bog down a slower Ethernet connection. A 40-gigabit-per-second router-to-router link can keep things running smoothly for everybody.

5. Security Patches

It’s common for security updates to be applied immediately as they become available, but this timing can be cumbersome for your business. Most security patches require system restarts, which can disrupt your uptime during crucial operating hours. Plan patches for a time when you can increase your network’s safeguards and reduce disturbances.

When you trust Worldwide Services to maintain your server systems, our technical support team can help manage your security patches. With the right patch timing, you can enjoy better productivity, increased security strength and greater regulatory compliance.

6. Caches

A cache is a data layer stored in a computer’s random access memory (RAM), which operates at much higher speeds than standard hardware storage. Its basic use is to recall small amounts of application or web information that may be useful when a user returns to a location they’ve already visited.

Caching stores data in memory so it can be accessed easily later on. In the event of network downtime, a slow connection or a traffic spike, users can still use cached content. Caching is the principal way popular social media sites handle large network surges. With increased or improved caching, your business may be able to maintain uptime when your network is under stress.
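
As an illustration, a minimal in-memory cache with a time-to-live (TTL) can be sketched in a few lines of Python; production systems would typically reach for a dedicated store like Redis or Memcached instead:

```python
import time

class TTLCache:
    """A toy in-memory cache: entries expire after `ttl_seconds`."""

    def __init__(self, ttl_seconds=300):
        self.ttl = ttl_seconds
        self._store = {}

    def set(self, key, value):
        # Record the value alongside its expiry time.
        self._store[key] = (value, time.monotonic() + self.ttl)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, expires_at = entry
        if time.monotonic() > expires_at:
            del self._store[key]  # expired: evict and report a miss
            return None
        return value
```

During a traffic spike, a `get` hit serves the stored copy from RAM instead of regenerating the page or re-fetching it over the network.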

7. Performance Testing

Great network performance requires thorough attention to your network’s efficiency from every angle. Throughput, bandwidth and other metrics can all impact how well your network is running. Website monitoring tools usually have numerous features to help you track these metrics, including:

  • Domain lookup times
  • Uptime rates
  • Individual page element load times
  • Redirection times
  • First byte download times
  • Connection times
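
Several of these metrics can be sampled directly from Python's standard library. The sketch below times only the DNS lookup and the TCP connection for a host; it is illustrative, and real monitoring tools measure far more, such as TLS handshake, first byte and full page load times:

```python
import socket
import time

def time_connection(host, port=80, timeout=5):
    """Return (dns_ms, connect_ms): DNS lookup time and TCP connect time."""
    start = time.perf_counter()
    family, _, _, _, address = socket.getaddrinfo(
        host, port, type=socket.SOCK_STREAM
    )[0]
    dns_ms = (time.perf_counter() - start) * 1000

    start = time.perf_counter()
    with socket.create_connection(address[:2], timeout=timeout):
        connect_ms = (time.perf_counter() - start) * 1000
    return dns_ms, connect_ms
```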

Perhaps the greatest benefit of network monitoring is its nonstop service. This level of surveillance keeps you in the loop with your network without the need for constant attention. Most application performance monitoring (APM) software can even identify the root cause of a problem, saving you the trouble of manual diagnosis and expediting the troubleshooting process.

8. Redundancy Building

Redundancy refers to any backup schemes that are in place in case of a network failure. This can occur in several ways:

  • Providers can use alternative network paths or replacement equipment to build a redundant system.
  • Businesses may stock extra switches and routers to swap out a failing unit quickly and diminish its effects.
  • Businesses may program network protocols to switch paths when an initial path has failed.
  • Businesses may connect subnets to multiple routers within a network. These routers can update one another on the best path for a signal.
  • Businesses may use two cables to make a connection. If one cable is disconnected, traffic can continue flowing through the other.
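
The path-switching idea behind several of these schemes can be sketched as a simple failover loop. This is a hypothetical illustration: `paths` is an ordered list of `(name, send_function)` pairs, and traffic falls through to the next path when one raises a network error.

```python
def send_with_failover(payload, paths):
    """Try each redundant path in order; return the name of the first to succeed."""
    last_error = None
    for name, send in paths:
        try:
            send(payload)
            return name
        except OSError as error:
            last_error = error  # this path failed; fall through to the next
    raise ConnectionError("all redundant paths failed") from last_error
```

Real protocols make the same decision at the routing layer, but the logic is identical: detect a failure, switch paths, and only raise an alarm when every redundant option is exhausted.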

Wide area networks (WANs) were once the norm for network connections, but the rise of cloud computing has made experts question their reliability. Software-defined WAN (SD-WAN) offers another means of network redundancy: it has the capacity to migrate network traffic to the internet once traditional systems have failed.

9. Emerging Technologies

HTML5 is a newer standard that improves upon HTML, the markup language that describes the layout of webpages. HTML5 can manage text, video and graphics without the need for any extra plugins, which earlier versions of HTML required for rich media. Effective programming with HTML5 can lead to better network performance.

Managed IT services can also be considered an emerging technology. These services are one way to implement the above tips easily and effectively. Worldwide Services offers an array of solutions for your network needs, including:

  • Professional consulting and project management to secure your network
  • Repair services and asset recovery programs
  • Assistance planning, designing, building and operating your network

Work With a Network Maintenance Expert

Every business has network uptime needs that impact the welfare of their clients and their company. A reliable network can play a pivotal role in satisfying customers, improving productivity, increasing revenues and driving overall savings.

Maintaining your network should be a top priority for your business. Curtailing network failure begins at the hardware level. Worldwide Services can provide the third-party maintenance you need at lower costs with an increased return on investment. NetGuard, our around-the-clock technical assistance, keeps your best interests in mind, including saving money and increasing network availability. Contact us to get started today.


What is Network Optimization | June 12th, 2020

Network optimization encompasses the complete set of technologies and strategies a business deploys to improve its network domain functionality. Network and network domain refer to your organization’s set of hardware devices, plus the software and supportive technology allowing those devices to connect and communicate with one another.

One of the primary goals of network optimization is to provide the best possible network experience for users. We’ll cover the areas where organizations can begin to improve these connections — and what they stand to benefit from even small boosts in network optimization.

Why Is Network Optimization Important?

Network optimization works to enhance the speed, security and reliability of your company’s IT ecosystem. Improving that ecosystem seems intuitive in theory, yet it is challenging to master.

Strains on networks continue to grow due to the following factors: 

  • More devices are being brought into the workplace.
  • More cybersecurity threats are maturing.
  • More software applications are being used.
  • More data is collected, aggregated and shared — often simultaneously.
  • More teams are going remote.
  • More external entities require access to your networks.

The result? Without optimization, your in-office and remote employees, as well as your customers and clients, may be unable to use relevant software, share documents, send messages and emails, access data, browse your domain, make purchases or read your company blog from any digital device.

In short, network optimization is essential for business activities that require 24/7 access and real-time usage of digital technology. 

How to Measure Network Optimization Strategies

IT teams use several key metrics to track a successful optimization scheme. These metrics are most effective when viewed together to provide a holistic picture of your network’s strengths and weaknesses. Consult our guide for deeper network monitoring and analytics to track.

1. Traffic Usage

Traffic usage, or utilization, displays which parts of your network are the busiest and which tend to stay idle. Utilization also gauges the times when “peak” traffic occurs. To measure these differing streams of network traffic, IT teams calculate a ratio between current network traffic and the peak amounts networks are supposed to handle, represented as a percentage.
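
The utilization ratio is straightforward to compute. A minimal sketch, assuming traffic counters in bits per second:

```python
def utilization_percent(current_bps, capacity_bps):
    """Current traffic as a percentage of the link's rated capacity."""
    return current_bps / capacity_bps * 100

def peak_utilization(samples, capacity_bps):
    """Highest utilization across a list of (timestamp, bps) samples."""
    busiest = max(samples, key=lambda sample: sample[1])
    return busiest[0], utilization_percent(busiest[1], capacity_bps)
```

For example, 250 Mbps of traffic on a 1 Gbps link is 25% utilization, and scanning a day's samples reveals when the peak occurred.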

By tracking these usage percentages and peaks, your team can better understand which networks see the most usage internally from office employees and externally from customers and prospects. This information allows you to prioritize updates and security layers according to what is best for the network.

2. Latency

Latency refers to delays in network devices communicating with one another. In IT, data travels across a network in units known as “packets,” and latency is measured in two forms: one-way or round trip.

Both one-way and round-trip measurements capture how quickly data is exchanged across a network, which is at the core of all functioning network connections. Frequent latency suggests traffic and bandwidth congestion may be slowing everything from webpage loading speed to VoIP calls.


3. Availability vs. Downtime

A network’s availability metrics reveal how often particular hardware or software functions as it should. For example, businesses can track the availability scores of everything from SD-WANs and servers to specific business apps or websites.

Many IT network ecosystems aim for the goal of availability in five nines, which is an industry term for functioning properly 99.999% of the time. It’s debated whether five-nines availability is achievable, as it allows for less than 30 seconds of total downtime a month. Regardless, the high goal sets a gold standard for availability that keeps your network running reliably.
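
The downtime a given availability target allows is easy to derive: multiply the period by the fraction of time the network may be down. A quick sketch:

```python
SECONDS_PER_MONTH = 30 * 24 * 3600   # using a 30-day month
SECONDS_PER_YEAR = 365 * 24 * 3600

def downtime_budget_seconds(availability_pct, period_seconds):
    """Maximum seconds of downtime allowed by an availability percentage."""
    return period_seconds * (1 - availability_pct / 100)
```

Five nines (99.999%) allows roughly 25.9 seconds of downtime in a 30-day month and about 5.26 minutes per year.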

4. Network Jitter

Network jitter rates reveal how often data packets get interrupted. Properly optimized networks have minimal jitter, meaning data deliveries between devices are efficient, quick and coherent. High jitter likely means network routers are overburdened and cannot properly handle incoming and outgoing data packets.
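
Jitter is commonly computed as the variation between consecutive latency samples. The sketch below uses the mean absolute difference, a simplification of the smoothed interarrival-jitter estimator defined in RFC 3550:

```python
from statistics import mean

def jitter_ms(latency_samples_ms):
    """Mean absolute difference between consecutive latency measurements."""
    consecutive_pairs = zip(latency_samples_ms, latency_samples_ms[1:])
    return mean(abs(later - earlier) for earlier, later in consecutive_pairs)
```

A link whose round-trip times bounce between 20 ms and 25 ms has far higher jitter than one that holds steady at 30 ms, even though the second link is "slower" on average.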

5. Packet Loss

Packet loss happens when data packets fail to reach their target endpoint on your network. Similar to network jitter, frequent instances of packet loss disrupt some of your most basic business functions, such as sending file attachments, conducting video calls or giving wireless presentations.
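
Packet loss is usually reported as the percentage of sent packets that never arrived. A minimal sketch:

```python
def packet_loss_percent(packets_sent, packets_received):
    """Percentage of packets that never reached their endpoint."""
    if packets_sent == 0:
        raise ValueError("no packets were sent")
    return (packets_sent - packets_received) / packets_sent * 100
```

Even 1% loss, which is 10 dropped packets out of every 1,000, is enough to noticeably degrade a video call.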

The Benefits of Network Optimization

Improving your network ensures your company’s technology operates to the best of its abilities. With a high-functioning network in place, you open your organization up to the following advantages across its full tech ecosystem:

  • Improved productivity: Employees have a higher capacity for productivity as they are liberated from the headaches of slow software or frequent downtime.
  • Faster network speed: Optimization makes the entire ecosystem more interconnected and equipped to send and receive data packets quicker.
  • Heightened security: Network optimization can ensure your applications offer improved, around-the-clock network visibility.
  • More reliability: With optimization, your network can handle the ever-increasing amount and complexity of data that is pivotal to daily operations.
  • Bolstered disaster recovery: In the event of physical damage to your hardware or cyberattacks, an optimized network can help you recover quickly and limit data mismanagement or employee accidents.
  • Boosted customer experience: By improving the speed, navigability and functionality of your website, you can further encourage customer interactions and purchases.

Overall, the above advantages may result in a reduced need to purchase expensive hardware and software that becomes obsolete within a few years.


How to Improve Network Performance

The ideal network optimization scheme avoids overhauling your company’s existing set of hardware and software. Instead, it uses the lowest-cost methods to ensure better data flow via uninhibited traffic, often by tweaking network maintenance and upkeep best practices.

There are a few network optimization strategies to improve network performance with maintenance practices you likely already support, including:

  • Data caching for a more flexible means of data storage and retrieval.
  • Traffic shaping to maximize the speed and access to your highest-traffic network infrastructure.
  • Prioritizing SD-WAN over WAN, further improving traffic shaping and supporting the most business-critical pieces of your network. 
  • Eliminating redundant data clogging network memory.
  • Data compression to further eliminate redundant data and encourage more efficient data packet transfers. 
  • Router buffer tuning to minimize packet loss and direct smoother data transmissions. 
  • Data protocol streamlining, which bundles data and improves quality of service (QoS) across your network applications.
  • Application delivery suites that enhance how you see and track traffic across your network and control the flow and priorities of that traffic. 
  • Deploying flow visualization analytic software for 24/7 network monitoring.

Migrating from legacy architecture to cloud-based networks is likely the only major step in optimizing your network that may require new software.

Achieve Your Network Optimization Goals With Worldwide Services

A well-oiled network is at the heart of a high-functioning organization. Without optimizing your network, your business risks issues at every point in its IT ecosystem — from poor Wi-Fi connections and congested data storage to remote employees being unable to access software to perform their work.

Leverage your resources by partnering with a premier network-management service. Worldwide Services’ Network Monitoring and Infrastructure Support suite delivers: 

  • Incident management
  • Event monitoring and management
  • Reactive circuit support
  • Service request support
  • And many more network services

Request a quote today to maximize your network while experiencing cost savings.


What Is Storage Architecture? | May 20th, 2020

The storage architecture of your system is a critical component of data transfer and accessing vital information. It provides the foundation for data access across an enterprise. Depending on your operations and the needs of your business, specific storage architectures might be necessary to enable employees to work to their fullest potential.

So what is IT storage architecture and how does it play into the everyday tasks you need to get done? To help you understand storage optimization, we’ve outlined the details of storage architecture and what you need to know to make informed decisions about the design and maintenance of one of the most critical components of your enterprise.

What Is Network Storage Architecture?

Network storage architecture refers to the physical and conceptual organization of a network that enables data transfer between storage devices and servers. It provides the backend for most enterprise-level operations and allows users to get what they need.

The setup of a storage architecture can dictate which aspects get prioritized, such as cost, speed, scalability or security. Since different businesses have different needs, what goes into IT storage architecture can be a big factor in the success and ease of use of everyday operations.

The two primary types of storage systems, network-attached storage (NAS) and storage area networks (SAN), offer similar functions but vary widely in execution.


1. Network-Attached Storage (NAS)

A NAS system connects a computer with a network to deliver file-based data to other devices. The files are usually held on several storage drives arranged in a redundant array of independent disks (RAID), which helps to improve performance and data security. This user-friendly approach appears as a network-mounted volume. Security, administration and access are relatively easy to control.

NAS is popular for smaller operations, as it allows for local and remote filesharing, data redundancy, around-the-clock access and easy upgrading. Plus, it isn’t very expensive and is quite flexible. The downside to NAS is that server upgrades may be necessary to keep up with growing demand. It can also struggle with latency for large files. For small file sizes, it wouldn’t likely be noticeable, but if you work with large files like videos, this latency can interrupt many processes and significantly slow you down.

2. Storage Area Network (SAN)

SAN creates a storage system that works with consolidated, block-level data. It bypasses many of the restrictions caused by TCP/IP protocols and congestion on the local area network, giving it higher access speed than a NAS system. Part of the reason for this improvement in speed involves the way files are served: while NAS serves files over Ethernet, SAN serves block data over an incredibly high-speed Fibre Channel fabric, allowing for fast access. SAN improves accessibility and appears to users like a local hard drive.

Due to its complexity, SAN is often reserved for big businesses that have the capital and the IT department to manage it. For businesses with high-demand files like video, the low latency and high speeds of SAN are a significant benefit. It also fairly distributes and prioritizes bandwidth throughout the network, which is great for businesses with high-speed traffic like e-commerce websites. Other bonuses of SAN include expandability and block-level access to files. The biggest downside to SAN is its cost and upkeep challenges, which is why it is typically used by large corporations.

Types of Storage Architecture

Within these storage systems, you can find a wide variety of setups. Different structures can influence the performance of any given storage system. The components of these setups include:

  • The front end interface: Usually connected to the access layer of the server infrastructure, this interface is what allows users to interact with the data.
  • Master nodes: A master node is the one that communicates with the compute nodes using information from outside the system. It manages the compute nodes and takes care of monitoring resources and node states. Often, these are housed in a more powerful server than the compute nodes.
  • Compute nodes: A compute node helps to run a wide variety of operations like calculations, file manipulation and rendering.
  • A consistent file system: With a parallel file system shared across the server cluster, compute nodes can access file types easily and offer better performance.
  • A high-speed fabric: Creating communication between nodes requires a fabric that offers low latency and high bandwidth. Gigabit Ethernet and InfiniBand technologies are the primary options.

Below are some of the styles of architecture you may find.

1. Multi-Tiered Model

With a multi-tiered data center, HTTP-based applications make good use of separate tiers for web, application and database servers. It allows for distinct separation between the tiers, which improves security and redundancy. Security-wise, if one tier is compromised, the others are generally safe with the help of firewalls between them. As for redundancy, if one server goes down or needs maintenance, other servers in the same tier can keep things moving.

2. Clustered Architecture

In a clustered system, data stays behind a single compute node, and nodes don’t share memory with one another. The input-output (I/O) path is short and direct, and the system’s interconnect has exceedingly low latency. This simple approach is actually the one that touts the most features because of how easy it is to add on data services.

One approach to the clustered architecture model is to layer “federation models” on top of it to scale out somewhat. This bounces the I/O around until it reaches the node that contains the data. These federated layers require additional code to redirect data, which adds latency to the entire process.

3. Tightly-Coupled Architectures

These architectures distribute data between multiple nodes, running in parallel, and use a grid of multiple high-availability controllers. They have a significant amount of inter-node communication and work with several types of operations, but the master node organizes input processing. These systems were originally designed to make I/O paths symmetric throughout the nodes and limit how much drive failure can unbalance I/O operations.

With a more complex design, a tightly-coupled architecture requires much more code. This aspect limits the availability of data services, making them rarer in the core code stack. However, the more tightly coupled a storage architecture is, the more predictably it can provide low latency. While tight coupling improves performance, it can be difficult to add nodes and scale up, which inevitably adds complexity to the entire system and opens you up to bugs.


4. Loosely Coupled Architectures

This type of system does not share memory between nodes. The data is distributed among them with a significant amount of inter-node communication on writes, which can make it expensive to run in terms of compute cycles. The data transmitted is transactional. Sometimes, low latency gets hidden in write locations that are themselves low-latency, like SSDs or NVRAM, but there is still going to be more movement in a loosely coupled architecture, creating extra I/Os.

Similar to the tightly-coupled architecture, this one can also follow a “federation” pattern and scale out. Usually, it entails grouping nodes into subgroups with special nodes called mappers.

This architecture is relatively simple to use and good for distributed reads where data can be in multiple places. Since the data is in more than one spot, multiple nodes can hold it and speed up access. This factor makes this architecture particularly suited for server and storage software as well as hyper-convergence on transactional workloads.

Just as each node doesn’t share memory, they also don’t share code, which stands separate from other nodes. This design has a few effects. If the data is heavily distributed on writes, you’ll see higher latency and less efficiency in I/O operations per second (IOPS). If you have less distribution, you might get lower latency, but you won’t see as much parallelism on reading as you would otherwise. Finally, the loosely coupled architecture can offer all three options — low write latency, high parallelism and high scaling — if the data is sub-stratified and you don’t write a large number of copies.

5. Distributed Architectures

While it may look similar to a loosely coupled architecture, this approach works with non-transactional data. It does not share memory between the nodes, and data is distributed across them. The data gets chunked up on one node and occasionally distributed as a measure of security. This type of system uses object and non-POSIX filesystems.

This type of architecture is less common than many others but used by extremely large enterprises, as it works easily with petabytes of storage. Its parallel processing model and speed make it a great fit for search engines. It is incredibly scalable due to its chunking methods and its independence from transactional data. Due to its simplicity, a distributed, non-shared architecture is usually software-only and lacks any dependency on hardware.


What Are the Elements of Storage Architecture?

Designing a storage architecture is often a balance of different features. Improve one aspect, and you may worsen another. You’ll have to identify what features are most critical for your type of work and how you can most effectively get the most out of them. You’ll also need to balance the cost and the needs of the organization. Here are some of the most prevalent aspects of developing storage architecture.


1. Data Pattern

Depending on the type of work you do, you may have a random or sequential pattern of I/O requests. Which type of pattern you work with most will affect the way that the components of the disk physically reach the area that contains the data.

  • Random: In a random pattern, the data is written and read at various locations on the disk platter, which can influence the effectiveness of a RAID system. The controller cache uses patterns to predict the data blocks it will need to access next for reading or writing. If the data is random, there is no pattern for it to work from. Another issue with a random pattern is the increase in seek time. With data spread out across data blocks, the disk head needs to move each time a piece of information is requested. The arm and disk head physically have to move there, which can add to the seek time and impact performance.
  • Sequential: The sequential pattern works, as you would imagine, in an ordered fashion. It is more structured and provides predictable data access. With this kind of layout, the RAID controller can more accurately guess which data blocks will need to be accessed next and cache that information. It boosts performance and keeps the arm from moving around as much. These sequential applications are usually built with throughput in mind. You’ll see sequential patterns with large file types, like video and backups, where data is written to the drive in continuous blocks.

In random workloads, the performance of the disk depends on the spin speed and the time it takes to access the data. The faster the disk spins, the more IOPS it offers. In sequential operations, all three major disk types — SATA, SAS and SSD — offer similar performance levels. In general, though, sequential patterns often occur with large or streaming media files, which are best suited to SATA drives. Random patterns happen with small files or inconsistent storage requests, like those on virtual desktops. SAS and SSD are usually the best options for random patterns.

As far as spinning speeds and access times go, here’s how the drives compare.

  • SATA: SATA drives have relatively large disk platters that can struggle with random workloads due to their slow speed. The large platter size can cause longer seek times.
  • SAS: These drives have smaller platters with faster speeds. They can cut the seek time down significantly.
  • SSD: The SSD drive is excellent for extremely high-performance workloads. It has no moving parts, so seek times are almost nonexistent.
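As a rough illustration of how platter speed and seek time translate into random performance, a spinning drive's random IOPS can be approximated as the number of seek-plus-rotate operations that fit into one second. The per-drive figures below are typical published ballpark values, not specs for any particular model:

```python
def estimated_random_iops(avg_seek_ms: float, rpm: int) -> float:
    """Approximate random IOPS for a spinning disk.

    Average rotational latency is half a revolution; random IOPS is how
    many (seek + rotate) operations complete in one second.
    """
    rotational_latency_ms = 0.5 * 60_000 / rpm  # half a revolution, in ms
    return 1_000 / (avg_seek_ms + rotational_latency_ms)

# Typical figures: 7,200 RPM SATA with ~9 ms seek vs. 15,000 RPM SAS with ~3.5 ms seek
sata = estimated_random_iops(avg_seek_ms=9.0, rpm=7_200)    # ≈ 76 IOPS
sas = estimated_random_iops(avg_seek_ms=3.5, rpm=15_000)    # ≈ 182 IOPS
print(f"SATA ≈ {sata:.0f} IOPS, SAS ≈ {sas:.0f} IOPS")
```

An SSD sidesteps this math entirely: with no moving parts, there is no seek or rotational latency term to pay.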

2. Layers

In data center storage architecture, you’ll typically see several layers of hardware that serve separate functions. These layers typically include the:

  • Core layer: This first layer creates the high-speed packet switching necessary for data transfer. It connects to many aggregation modules and uses a redundant design.
  • Aggregation layer: The aggregation layer is where traffic flows through and encounters services like firewalls, network analysis, intrusion detection and more.
  • Access layer: This layer is where the servers and network physically link up. It involves switches, cabling and adapters to get everything connected and allow users to access the data.

3. Performance vs. Capacity

Disk drive capabilities are always changing. Just think about how expensive a 1 terabyte (TB) hard drive was only five years ago, or how early hard drives cost a small fortune per megabyte (MB). Disk capacity used to be so low that SAN systems required many spindles to reach usable capacity, so generating enough IOPS per gigabyte (GB) was never a concern — they had plenty. Nowadays, SATA drives and SAS drives can offer similar total capacities, with the SATA array using significantly fewer disks. Fewer disks mean fewer IOPS generated per GB.

If your work involves a lot of random I/O interactions or extreme demand, using SATA disks can quickly cause your IOPS to bottleneck before you reach capacity. One option here is to front the disks with a solid-state cache, which can greatly improve random I/O performance.
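To see why fewer, larger disks can bottleneck, compare the IOPS density of two arrays with the same raw capacity: one built from a few large SATA drives and one from many smaller SAS drives. The per-drive numbers here are illustrative assumptions, not vendor specs:

```python
def array_totals(drive_count: int, iops_per_drive: float, gb_per_drive: float):
    """Total IOPS, total capacity (GB) and IOPS density for a simple array."""
    total_iops = drive_count * iops_per_drive
    total_gb = drive_count * gb_per_drive
    return total_iops, total_gb, total_iops / total_gb

# Roughly 24 TB of raw capacity built two ways
sata = array_totals(drive_count=6, iops_per_drive=75, gb_per_drive=4_000)
sas = array_totals(drive_count=40, iops_per_drive=180, gb_per_drive=600)
print(f"SATA: {sata[0]} IOPS over {sata[1]} GB -> {sata[2]:.3f} IOPS/GB")
print(f"SAS:  {sas[0]} IOPS over {sas[1]} GB -> {sas[2]:.3f} IOPS/GB")
```

Same capacity, but the many-spindle SAS array delivers over an order of magnitude more IOPS per GB, which is exactly the gap a solid-state cache in front of SATA disks tries to close.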

4. RAID Considerations

If you’re using a RAID system, you’ll have one more factor to think about: the parity penalty. This term refers to the performance cost of protecting data with RAID, and it only affects writes. If your work is write-heavy, the parity penalty will affect you more, since RAID’s parity calculations add overhead to every write operation. Different levels of RAID protection also carry different amounts of overhead.

Determining the level of overhead is a complex calculation, one that you can figure out with some information about your prospective system.
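The usual back-of-the-envelope version of that calculation applies a write penalty per RAID level (commonly cited as 2 for RAID 1/10, 4 for RAID 5, 6 for RAID 6) to the write share of the workload. A minimal sketch, with the raw IOPS and read/write mix as assumed inputs:

```python
# Commonly cited back-end operations per logical write, by RAID level
WRITE_PENALTY = {"RAID0": 1, "RAID1": 2, "RAID10": 2, "RAID5": 4, "RAID6": 6}

def functional_iops(raw_iops: float, read_fraction: float, raid_level: str) -> float:
    """Usable (functional) IOPS after the RAID write penalty.

    Each logical write costs `penalty` back-end operations, so the raw
    IOPS budget is split between reads (cost 1) and writes (cost penalty).
    """
    penalty = WRITE_PENALTY[raid_level]
    write_fraction = 1 - read_fraction
    return raw_iops / (read_fraction + penalty * write_fraction)

# Hypothetical array: 24 drives x 180 IOPS raw, 70/30 read/write mix
raw = 24 * 180  # 4,320 raw IOPS
print(f"RAID10: {functional_iops(raw, 0.7, 'RAID10'):.0f} usable IOPS")
print(f"RAID6:  {functional_iops(raw, 0.7, 'RAID6'):.0f} usable IOPS")
```

The same spindles deliver roughly half the usable write-mix IOPS under RAID 6 as under RAID 10, which is why the read/write ratio matters so much when choosing a protection level.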

Remember that some drive types can benefit from different configurations. An SSD, for instance, can have a RAID1+0 configuration for better performance, while a SATA drive with a RAID6 configuration offers extra security during rebuilds and high capacity.

How Is Storage Architecture Designed?

Designing storage architecture asks us to look closely at the requirements set forth by the business and the environment. It probably goes without saying, but meetings and discussions will help determine your needs. You’ll also want to enlist professional services to help with the specifics and building the architecture itself.

Once you determine what your data pattern looks like, you can start to review aspects like:

  • Capacity needs
  • Throughput
  • IOPS
  • Additional functions, like replication or snapshots

If you can’t get data on these aspects, looking closely at your operating system and applications can get you started. If you find yourself with a random data pattern, try to balance capacity with IOPS requirements. For sequential workloads, prioritize capacity and throughput. Your MB per second (MB/s) ratings for sequential data will usually exceed requirements.
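Once you have estimates for capacity, throughput and IOPS, a first-pass sizing exercise is simply to take whichever requirement demands more drives. A sketch under assumed per-drive figures (the drive specs and workload numbers are hypothetical):

```python
import math

def drives_needed(capacity_gb: float, iops: float,
                  gb_per_drive: float, iops_per_drive: float) -> int:
    """Minimum drive count that satisfies both the capacity and IOPS targets."""
    for_capacity = math.ceil(capacity_gb / gb_per_drive)
    for_iops = math.ceil(iops / iops_per_drive)
    return max(for_capacity, for_iops)

# Random workload: 10 TB and 4,000 IOPS on assumed 600 GB / 180 IOPS SAS drives
print(drives_needed(10_000, 4_000, 600, 180))  # IOPS-driven: 23 drives
# Sequential workload: 20 TB and modest 300 IOPS on 4 TB / 75 IOPS SATA drives
print(drives_needed(20_000, 300, 4_000, 75))   # capacity-driven: 5 drives
```

Notice how the random workload is sized by IOPS long before it runs out of capacity, while the sequential workload is sized by capacity alone, mirroring the balancing advice above.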


Tips for Designing a Storage Architecture

Of course, we can’t put everything you need to know about storage architecture in one article, but here are a few more of our tips to help you create the ideal storage structure without too much of a headache.

  • Evaluate cost from the outset: Keeping cost in mind as you design from the ground up allows you to make realistic decisions that will work in the long term. You wouldn’t want to end up with an architecture that needs to be reorganized right away because upkeep is too expensive or it doesn’t meet the company’s needs. Be realistic about the costs of a storage architecture so it fits within the business budget.
  • Find areas where you can compromise: You won’t be able to prioritize everything. In many instances, focusing on one aspect will hurt the quality of another. A high-performance system will be costly and could be less scalable. A scalable system might require more skilled administration and could lose speed. Talk with stakeholders about what aspects are necessary for the system and why so you can evaluate possible trade-offs with business needs in mind.
  • Work in phases: Your first draft is not going to be the same as the final. As you work through the project, you will encounter specific challenges and learn more about the technical details of your system. Try not to lock yourself into a plan and allow the architecture to change organically as you uncover more information.
  • Identify your needs first: While it may be tempting to dive right into the specific components that you want to use, identifying more abstract requirements is an excellent way to start. Think about the state of your data, what formats you’ll be working with and how you want it to communicate with the server. Try to develop as much information about the required tasks as you can. This approach allows you to work your way down the chain and find solutions that match the needs of more than one operation.

Work With an IT Expert

As you’ve probably gathered, an enterprise’s storage architecture is a complicated piece of technology. And it’s too foundational to try to piece together if you don’t know what you’re doing. That’s where IT experts come in.

Here at Worldwide Services, we know data, and we know businesses. Our team of professionals can design a storage architecture from the ground up with your company’s needs as their top priority. Whether you need a system that focuses on speed, scalability or something else, we can help. We can also provide maintenance for an existing storage architecture. To learn more about our services, reach out to us today.



EoL vs. EoSL | April 10th, 2020

Imagine if you got a message from the manufacturer of your hardware saying they were discontinuing support for one of the most vital parts of your company. That would be a frustrating message to hear, and one that may send you scrambling for a solution. You might think you have to buy the latest hardware to continue receiving support. Or, you may think you’ll be stuck struggling with old, outdated equipment until you can convince higher-ups to spend money on new hardware. Fortunately, these aren’t your only options.

If you get a notice like this from the original equipment manufacturer (OEM), it typically means the equipment is entering one of two stages: End of Life (EoL) or End of Service Life (EoSL). So how is EoL different from EoSL? Keep reading to find out what these terms mean and how a third-party maintenance (TPM) solution can help you minimize their effects.

What Is EoL?

As the term suggests, an EoL product is at its End of Life. This typically means the manufacturer will not be producing any more of the item. The OEM might have a new generation coming up or a completely different product they want to focus on, so halting production allows them to refocus funds toward new developments. Usually, the OEM will still offer maintenance and post-warranty support on EoL products. The firmware is typically stable by this point, so you probably won’t have any updates or patches come through.

If your product has reached its EoL, it may be a good time to put new hardware on your radar. You can typically still get a few more years out of it — especially with the help of TPM — but eventually, you will need to get something new. Since substantial hardware updates often require significant capital, make sure the decision-makers know about potential replacement costs in the upcoming years. Advance notice may make the transition more manageable when you do need to purchase new hardware.

Another reason to stay on top of the life cycle of your equipment is to maximize resale value. For instance, you might have a product that currently has excellent resale value. If the OEM announces it is going EoL, it quickly loses value when the manufacturer stops offering the product, accessories and support. By staying up-to-date with EoL status announcements, you can more appropriately gauge and plan for resale.

What Is EoSL?

In looking at End of Life vs. End of Service Life, support is the biggest differentiating factor. The EoSL label is a little more final than EoL. At this stage, the OEM stops selling the product and won’t offer any more maintenance or support. If they do support the hardware in some way, they may charge a premium for the service. You also won’t see firmware updates or patches for the product.

By this point, a piece of hardware has likely been out for a while, and the OEM might be trying to push a new technology or product line. Like with EoL products, you’ll want to keep new hardware in your sights. You can maximize your use for a while after the EoSL designation, but the product will probably still be on its way out the door. Technology advances quickly and, depending on the hardware, yours might become outdated rapidly.

Some issues that can pop up when a product reaches EoSL without you noticing include the following:

  • Decreased performance
  • Software compatibility issues
  • Security weaknesses, though patches may still come through or be available through other sources
  • Lower operating efficiency

Always be aware of which stage of the IT life cycle your equipment is in so you can budget efficiently. In addition to saving you time and money, appropriately managing your EoSL equipment can be more convenient. For instance, updating your servers at the wrong time, like during your busiest month, could be a real pain — slowing productivity and adding stress to the workday. It requires downtime, investment and change. TPM can help you manage your hardware until you’re ready to switch.
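Staying aware of those stages is straightforward to automate: keep the announced dates alongside your inventory and flag anything entering your planning horizon. A minimal sketch — the device names and EoSL dates below are hypothetical:

```python
from datetime import date, timedelta

# Hypothetical inventory: device name -> announced EoSL date
inventory = {
    "core-switch-01": date(2021, 1, 31),
    "san-array-02": date(2023, 6, 30),
}

def eosl_warnings(inventory: dict, today: date, horizon_days: int = 365) -> list:
    """Return devices whose EoSL date falls within the planning horizon."""
    cutoff = today + timedelta(days=horizon_days)
    return sorted(name for name, eosl in inventory.items() if eosl <= cutoff)

print(eosl_warnings(inventory, today=date(2020, 4, 10)))  # ['core-switch-01']
```

Running a check like this on a schedule gives decision-makers the advance notice they need to budget for replacements or line up a TPM contract before support lapses.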


What Is the Difference Between EoL and EoSL?

One of the main differences between EoL and EoSL products is service offerings. While the OEM may still offer support for EoL products, EoSL products no longer receive support. You’ll have to go through TPM or another service provider for any kind of assistance since the OEM has cut EoSL products off completely.

Aside from the maintenance aspect, the terms “End of Life” and “End of Service Life” are relatively close in meaning. Both signal the OEM cutting ties with the product and reducing or eliminating support. Usually, they do this to push marketing and development efforts for a new product. Plus, by eliminating or reducing support on the old product, they can often convince companies to upgrade before it is really necessary.

OEMs aren’t service companies. They make a profit by turning around new products and generating upgrades. OEMs charge a high amount for EoL services for a few reasons:

  • Their business model is not intended for long-term service. The high prices of their policy keep things profitable for them. Remember, they generally want to focus on creating and selling products, not servicing existing ones.
  • Their part supply diminishes after production stops. As OEMs run out of refurbished parts, they may have trouble procuring necessary components. During your initial warranty phase, which is usually two or three years after the purchase date, manufacturers have plenty of access to these components and often supply you with new ones as needed. The further your product gets from the EoL date, the harder and more expensive it is to get certified, quality parts.

What Can Your Organization Do With EoL or EoSL Equipment?

You might feel backed into a corner if your products are deemed EoL or EoSL. It may seem like you have to get a new product if you want to continue getting any support, which means spending money and losing the value of the old hardware. If you keep your equipment, you’ll have to proceed without OEM support. But how will you deal with technical difficulties or breakdowns? And how can you be confident that your hardware will work properly as new software, technology and security concerns arise?

In some cases, the OEM might offer you a “solution” with an exorbitant price for continued services. These prices may rise over time as the OEM moves further and further away from your product. Another “solution” you may try is to keep the hardware and handle maintenance and repairs on your own. That work often requires a higher level of expertise, and it adds costs within your IT department. You may need to hire someone to do that work and may struggle to find the right parts. For the same price, other options are available that may fit your business better.


Third-Party Maintenance (TPM) Solutions

A TPM company can step in when the OEM support ends and keep your hardware running smoothly for years after its EoL or EoSL. Just because manufacturers like Cisco and HP have stopped supporting a piece of hardware doesn’t mean the hardware can’t continue to support your business.

You might not be ready for a sizable new purchase, especially if your existing equipment is working just fine. You can significantly extend the life of your EoL or EoSL hardware by looking to TPM. A TPM service can step in and perform repairs or fixes as needed long after the OEM’s support ends.

TPM companies typically have access to OEM parts through trusted channels or the OEMs themselves. They can provide expert service at the same or higher level of care as the OEM, but at a much lower cost. TPM solutions often exceed the original service level agreement, which can allow you to free up capital and use your hardware as you see fit.

Some of the benefits of working with TPM programs include the following.

1. Extended Hardware Lifespan

When a manufacturer stops offering support for a product, the equipment is not suddenly useless. It still maintains much of its original value, and if the product works for your company, it is convenient to keep using it. As long as there aren’t any glaring issues with the hardware or problems with its place in your company’s infrastructure, you can stretch your initial investment and keep the product working longer with TPM.

2. Sticking to Your Own Schedule

Maybe your hardware is going to reach its EoSL at a time when your company doesn’t have much extra capital to spend on upgrades. Or maybe it will happen during your busy season when a hardware upgrade might be too disruptive. Regardless of when it happens, you shouldn’t have to alter your schedule because of the manufacturer.

You have your own business to run and your schedule to balance. Plus, you don’t want to be rushed into a hasty decision because you feel like you need new hardware. By continuing to use your existing equipment with the help of TPM, you can choose to upgrade when you’re ready. You can prepare for the transition and have your finances in order before purchasing new equipment.

3. High-Quality Repairs

If you’re working on hardware yourself, you may not have easy access to OEM parts and may have to turn to less-than-trustworthy manufacturers to get what you need. Working with TPM gives you more access to trusted parts that will work with your equipment. Plus, the technicians performing the repairs are experts in the field and may be more familiar with the equipment than in-house techs.

4. Lower Costs

Avoid the sky-high prices for post-EoL OEM services by working with a TPM company. With much lower prices and a similar skill set, they can get the job done without charging a premium.

In addition to savings, the scope of work included with TPM may be more expansive or personalized than what is offered by your OEM. You also get to push back the cost of new equipment. While an upgrade will still be necessary at some point, TPM can help keep it at bay.

5. Attention to Your Needs

While an OEM specializes in their equipment, TPM programs can focus more on the needs of your company. They might be able to perform a repair in a way that creates as little downtime as possible or offer more encompassing services that can provide a better deal. Some TPM options, like Worldwide Services, let you purchase only the services you need. You can even create a hybrid model with elements of OEM and TPM support if that’s what fits your business.


What to Look for in a TPM Program

TPM is an excellent solution for EoL or EoSL hardware that you’re not ready to part with. It can help you extend the value of these tools and ensure you get to upgrade whenever you’re ready — not when your OEM says you should.

One factor to keep in mind when selecting TPM is the type of organization it is. According to Gartner, you can find two kinds of TPM:

  • Traditional TPM: Traditional TPM companies are independent support contractors that make most of their money from annual support contracts. They are typically more established and have investment funding.
  • Secondary hardware suppliers: Some TPM companies make most of their money from the resale of hardware. Many in this category started as resellers before offering TPM, so they have typically been around for a shorter time.

Traditional TPM companies tend to have the advantage of more experience and a devoted maintenance focus.

Whether you only need online technical support or you want a comprehensive maintenance solution, Worldwide Services can help. Our experts work to provide 24/7 access to assistance that fits your business needs. Our NetGuard program offers TPM support for over 200 current and legacy lines of OEM products from major manufacturers like Cisco and HP as well as lesser-known brands.

Partner With Worldwide Services

When it comes to maintaining EoL products, you need a company you can trust. We allow you to retain full control over your equipment, so all your sensitive data stays where you want it. Plus, you can save up to 50% on maintenance costs when implementing NetGuard. Paired with an established presence in the industry and our proven track record, NetGuard is a trustworthy, reliable option.

Our TPM solution can help protect your hardware investment and save you money. For more information on how Worldwide Services can benefit your company, contact us today.



Do I Need a Cisco Catalyst 9000? | March 13th, 2020

Cisco engineers designed the Catalyst 9000 for use in the modern digital era. With the 9000 models, users can connect to virtually any network, including cloud, mobile and wireless. These switches range in size and power with models designed to fit any need.

As technology improves, the right tools for connecting must advance with it. Companies looking for fast and secure connections are often encouraged to upgrade to Cisco Catalyst 9000, but it is not necessarily required. Business is about the bottom line. Companies must find the right balance between cost and performance when improving a system.

What Is Cisco Catalyst 9000?

The Cisco Catalyst family of switches offers solutions for businesses and organizations of any size. Demands on networks are growing each year. The right equipment will be necessary to process incoming data, maintain security and ensure connectivity. Cisco 9000 switches include:

  • Security: Cisco uses Encrypted Traffic Analytics (ETA) to identify cyber threats. Advanced analytics identify risks to your network, including threats hidden in encrypted traffic. Additionally, 9000-series switches allow users to host private cloud networks, limiting exposure to hackers.
  • Connectivity: The 9000 is built for the internet of today and the network of tomorrow. Cisco designed the 9000 with cloud and mobile connectivity as a top priority, with the highest UPOE available. These switches are designed to handle projected data usage well into the future.
  • Access: Improve both the administrator and user experience with constant connectivity. Cisco 9000 switches are always on, ensuring the latest updates are applied without interrupting access.
  • Programmability: Cisco Catalyst 9000 switches include a built-in programmable UADP ASIC. Switches also come equipped with an x86 CPU running open Cisco IOS XE. Simplify operations with blue beacon technology and built-in RFID capabilities.
  • Design: Discover, configure and provision devices across your network in a fraction of the time. Cisco DNA and SD-Access allow for better network management with reduced exposure to vulnerabilities. Automate more tasks for greater efficiency with less room for error.

Pros and Cons of Cisco Catalyst 9000 Switches

If you’re thinking of upgrading your network capabilities, you have a lot of options. Cisco 9000-series switches do offer some performance benefits; however, other options are available that can deliver similar performance at a much lower cost. Business resources, network demands and future planning can all affect your decision to act.

Before moving to a Catalyst switch, it’s essential to explore both the advantages and disadvantages of making the change. Here’s a quick breakdown of the pros and cons of Cisco Catalyst 9000 switches:

Pros:
  • Density: Up to twice the density of comparable switches
  • Safety: Better security features for cyber threat reduction
  • Analytics: Intelligent analytics for better network management
  • Support: Ongoing training and technical assistance from Cisco

Cons:
  • Licensing: Subscription-based software licensing ties you to recurring fees and terms
  • Price: Catalyst 9000 switch pricing often exceeds business resources
  • Vendor Lock-In: Reduces your flexibility and increases long-term costs

Though moving to the latest option from Cisco has some benefits, the disadvantages could affect you if you choose it. Fortunately for your organization, there are Cisco Catalyst 9000 alternatives that can provide comparable performance without the drawbacks.

What Is Your Alternative?

Worldwide Services has switching solutions to fit your needs and your budget. Finding the right IT solutions for your organization starts with a better understanding of your network. We will perform a complete walkthrough of your system to identify areas for improvement, helping your business maintain a competitive edge.

At Worldwide Services, we work with businesses and organizations of all sizes. Get more information on Cisco switches, comparable products, or how we can help optimize your network. Contact Worldwide Services today at 855-894-6400 or send us a message online.

