Data Center Generators


Generators are key to data center reliability. Every data center operator should consider supplementing a battery-based uninterruptible power supply (UPS) with an emergency generator. The question has become increasingly important as super storms such as Hurricane Sandy in the Northeast United States knocked out utility power stations and downed many power lines, resulting in days or weeks of utility power loss.

Beyond disaster protection, a backup generator is important as utility providers plan summer rolling blackouts and brownouts and data center operators see reduced utility service reliability. In a rolling blackout, power to industrial facilities is often shut down first. New data center managers should check the utility contract to see whether the data center is subject to such disconnects.

Studies show generators played a role in 45 to 65 percent of outages in data centers with an N+1 configuration (one spare backup generator). According to Steve Fairfax, President of MTechnology, “Generators are the most critical systems in the data center.” Mr. Fairfax was the keynote speaker at the 2011 7×24 Exchange Fall Conference in Phoenix, Arizona.

What Should You Consider Before Generator Deployment?

  • Generator Classification / Type. A data center design engineer and the client should determine whether the generator will be classified as an Optional Standby power source, a Code Required Standby power source, or an Emergency back-up generator that also provides standby power to the data center.

  • Generator Size. When sizing a generator, it is critical to consider the total current IT power load as well as the expected growth of that IT load. Consideration must also be made for supporting facility infrastructure (i.e., UPS load) requirements. The generator should be sized by an engineer using specialized sizing software.
  • Fuel Type. The most common generator fuel types are diesel and natural gas. There are pros and cons to both: diesel fuel deliveries can become an issue during a natural disaster, and gas line feeds can likewise be disrupted. The right choice for your data center depends on local environmental constraints (e.g., Long Island primarily uses natural gas to protect the aquifer under the island), fuel availability, and the required size of the standby/emergency generator.
  • Deployment Location. Where will the generator be installed? Is it an interior installation or an exterior installation? An exterior installation requires the addition of an enclosure. The enclosure may be just a weather-proof type, or local building codes may require a sound attenuated enclosure. An interior installation will usually require some form of vibration isolation and sound attenuation between the generator and the building structure.
  • Exhaust and Emissions Requirements. Today, most generator installations must meet Tier 4 exhaust emissions standards. Requirements may depend upon the location of the installation (i.e., city, suburban, or rural).

  • Required Run-time. The run-time for the generator system needs to be determined so the fuel source can be sized (i.e. the volume of diesel or the natural gas delivery capacity to satisfy run time requirements).
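
The sizing considerations above can be expressed as a rough calculation. This is an illustrative sketch only, with assumed figures for UPS efficiency, supporting mechanical load, and safety margin; as noted above, actual sizing must be performed by an engineer using specialized sizing software.

```python
# Hedged sizing sketch: all factors below are illustrative assumptions,
# not engineering guidance.
def generator_kw(it_load_kw, annual_growth, years, ups_efficiency=0.94,
                 cooling_factor=0.5, safety_margin=0.25):
    """Estimate required generator capacity (kW) from current IT load."""
    future_it = it_load_kw * (1 + annual_growth) ** years  # projected IT load
    ups_input = future_it / ups_efficiency                 # account for UPS losses
    cooling = future_it * cooling_factor                   # supporting infrastructure
    total = ups_input + cooling
    return total * (1 + safety_margin)                     # engineering margin

# Example: 500 kW IT load today, 10% annual growth, 5-year horizon
print(round(generator_kw(500, 0.10, 5), 1))
```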

What Should You Consider During Generator Deployment?

  • Commissioning. Commissioning of the generator system is essentially load testing of the installation plus the documentation trail: equipment selection, the shop drawing approval process, shipping documentation, and receiving and rigging the equipment into place. The process should also include the construction documents for the installation project.
  • Load Testing. Typically, a generator system is required to run at full load for at least four (4) hours. It must also demonstrate that it can handle step load changes from 25% to 100% of its rated kilowatt capacity. Ideally, the load test is performed with a non-linear load bank whose power factor matches the generator specification; typically, a non-linear load bank with a power factor between 75% and 85% is used.

  • Servicing. The generator(s) should be serviced after load testing and commissioning are completed, prior to release for use.
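
The step-load test described above can be laid out as a simple schedule. The intermediate step points and dwell times below are assumptions for illustration; the actual test plan comes from the project specification.

```python
# Illustrative step-load schedule (25% -> 100% of rated kW, as described
# above); intermediate steps and dwell time are assumed values.
def step_load_schedule(rated_kw, steps=(0.25, 0.50, 0.75, 1.00), dwell_min=60):
    """Return (percent, kW, minutes) tuples for each load step."""
    return [(int(p * 100), round(rated_kw * p, 1), dwell_min) for p in steps]

for pct, kw, minutes in step_load_schedule(750):
    print(f"{pct:3d}% load: {kw} kW for {minutes} min")
```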

What Should You Consider After Generator Deployment?

  • Service Agreement. The generator owner should have a service agreement with the local generator manufacturer’s representative.
  • Preventative Maintenance. Preventative maintenance should be performed at least twice a year. Most owners who consider their generator installation critical to the business run a quarterly maintenance program.
  • Monitoring. A building monitoring system should provide immediate alerts if the generator or automatic transfer switch (ATS) fails, or becomes active because the normal power source has failed. The normal power source is typically the electric utility, but an internal feeder breaker opening inside the facility can also cause an ATS to start the generator(s) to provide standby power.
  • Regular Testing. The generator should be tested weekly for proper starting, and it should be load tested monthly or quarterly to determine that it will carry the critical load plus the required standby load and any emergency loads that it is intended to support.
  • Maintenance. The generator manufacturer or a third-party maintenance organization will notify the owner when important maintenance milestones are reached, such as minor rebuilds and major overhauls. Run hours generally determine when these milestones occur, but other operational characteristics of the generator(s) also factor into what needs to be done and when.

PTS Data Center Solutions provides generator sets with power ratings from 150 kW to 2 MW. We can develop the necessary calculations to properly size your requirement and help you with generator selection, procurement, site preparation, rigging, commissioning, and regular maintenance of your generator.

To learn more about PTS recommended data center generators, contact us or visit (in alphabetical order):

To learn more about PTS Data Center Solutions available to support your Data Center Electrical Equipment & Systems needs, contact us or visit:

Zerto Hypervisor Replication


Hypervisor Replication

Introducing Hypervisor-based Replication. Zerto offers a virtual-aware, software-only, tier-one, enterprise-class replication solution purpose-built for virtual environments. By moving replication up the stack from the storage layer into the hypervisor, Zerto created the first and only replication solution that delivers enterprise-class, virtual replication and BC/DR capabilities for the data center and the cloud.


Zerto architecture: Simple, effective, and virtual-ready

At the heart of this patent-pending replication technology are two key components:

Zerto Virtual Manager (ZVM) – The ZVM plugs directly into the virtual management console (such as VMware vCenter), enabling visibility into the entire infrastructure. The ZVM is the nerve center of the solution, managing replication for the entire vSphere domain and keeping track of applications and information in motion in real time.

Virtual Replication Appliance (VRA) – The VRA is a software module that is automatically deployed on the physical hosts. The VRA continuously replicates data from user-selected virtual machines, compressing and sending that data to the remote site over WAN links.

Key Differences with Hypervisor-based Replication:

Because it is installed directly inside the virtual infrastructure (as opposed to on individual machines), the VRA can tap into a virtual machine’s I/O stream: each time the virtual machine writes to its virtual disks, the write command is captured, cloned, and sent to the recovery site.
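
As a rough illustration of that capture-clone-compress-send flow (Zerto’s VRA is proprietary; the class and names below are hypothetical):

```python
# Hypothetical sketch of hypervisor-level write replication: the original
# write proceeds untouched while a clone is compressed and queued for the
# remote site. Not Zerto's actual implementation.
import zlib

class ReplicationAppliance:
    """Mimics a VRA: intercepts each virtual-disk write, clones it,
    compresses it, and ships it toward the recovery site."""

    def __init__(self, wan_send):
        self.wan_send = wan_send  # callable that ships bytes over the WAN

    def on_write(self, vm_id, disk_offset, data: bytes):
        cloned = bytes(data)                      # clone the captured write
        payload = zlib.compress(cloned)           # compress before WAN transfer
        self.wan_send((vm_id, disk_offset, payload))

sent = []
vra = ReplicationAppliance(wan_send=sent.append)
vra.on_write("vm-01", 4096, b"hello" * 100)
vm_id, offset, payload = sent[0]
assert zlib.decompress(payload) == b"hello" * 100
```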

Unlike some replication technologies that primarily offer data protection along with cumbersome snapshots and backup paradigms, Zerto’s solution provides continuous replication with zero impact on application performance.

Hypervisor-based replication is fully agnostic to storage source and destination, natively supporting all storage platforms and the full breadth of capabilities made possible by virtualization, including high availability, clustering, and the ability to locate and replicate volumes in motion.

Hypervisor Advantages

Hypervisor-based replication offers many important benefits:

  • Granularity: the ability to replicate at the level of any virtual entity, be it a single virtual machine, a group of virtual machines, or a virtual application (such as a VMware vApp).
  • Scalability: Zerto’s hypervisor-based replication solution is software-based so it can be deployed and managed easily, no matter how fast or large the infrastructure expands. The solution also enables administrators to perform operations and configure policies at the level of the virtual machines or applications.
  • Ease of management: With no guest-host requirements or additional hardware footprint, Zerto’s solution is easy to manage. It simply resides in the hypervisor, enabling centralized management from the virtual management console (such as VMware vCenter).
  • Server and storage motion: Only hypervisor-based replication allows you to quickly move virtual machines around from one physical server or array logical unit (data store) to another, while maintaining full replication.
  • Hardware-agnostic: Hypervisor-based replication is hardware-agnostic, supporting all storage arrays including Storage Area Network (SAN) and Network-Attached Storage (NAS), and virtual disk types such as Raw Device Mapping (RDM) and VMware File System (VMFS).
  • RPO and RTO: Zerto’s hypervisor-based replication solution achieves RPO in seconds, and RTO in minutes.

Other Zerto Solutions:

To learn more about PTS consulting services to support Data Center Consulting, Data Center Management, and Enterprise IT Consulting, contact us or visit:

Zerto Virtual Replication for Microsoft Hyper-V


Zerto Virtual Replication for Microsoft Hyper-V: Data Replication, Backup, Recovery, & Failover

Zerto Virtual Replication, the industry’s first hypervisor-based replication solution for VMware environments, is now available for Microsoft Hyper-V. Purpose-built for production workloads and deployed on a virtual infrastructure, Zerto Virtual Replication is the only technology that combines near-continuous replication with block-level, application-consistent data protection across hosts and storage.


Zerto Virtual Replication is the first hypervisor-based replication between different hypervisors. With the revolutionary capability to automatically convert Hyper-V VMs to VMware and VMware VMs to Hyper-V on the fly, Zerto allows replication from remote office Hyper-V instances into a central VMware datacenter, seamless migrations of workloads between hypervisors or even the ability to replicate VMs from a primary VMware datacenter to a secondary Hyper-V datacenter for disaster recovery.

Zerto Virtual Replication offers an RPO (Recovery Point Objective) of seconds and consistent point in time recovery for multiple-VM, production applications. Some additional features of Zerto Virtual Replication for Microsoft Hyper-V include:

  • Storage agnostic replication – any storage can be used at the source and target site
  • Hypervisor agnostic replication – Hyper-V to Hyper-V, vSphere to Hyper-V and Hyper-V to vSphere
  • No snapshots – Zerto Virtual Replication uses continuous data protection to minimize impact on the production environment
  • Automated orchestration of disaster recovery processes for application consistency
  • Simple management interface for KPIs and reporting


Zerto Replication for VMware


Simple VMware Replication: Business Continuity and Disaster Recovery Just Got Simple!


Zerto’s replication for VMware enables automated data recovery, failover and failback and lets you select any VM in VMware’s vCenter. It’s that simple – no storage configuration necessary, no agent installation on guest required.

VMware Replication without Storage Constraints

Zerto’s VMware replication protects at the VM and VMDK level, which means that protection decisions can be made without any storage constraints.

  • Replicate single or multiple VMs
  • Use an intuitive GUI embedded in VMware’s vCenter
  • Protect multiple virtual disks connected to the same VM
  • Recover immediately with the click of a button
  • Support replication of RDM devices
  • View and manage local and remote sites from the same vCenter
  • Deliver consistent protection across your company with the ability to support multiple sites

Enterprise Class Replication for VMware

Zerto designed Enterprise-Class replication for VMware to replace array-based replication. It provides the flexibility, scalability and ease of use expected in the virtual world without compromising any of the features and functionality required for protecting mission-critical production applications.

  • Achieve seconds of RPO with continuous replication
  • Recover applications instantly and automatically, providing RTO of minutes
  • Protect thousands of VMs with scale-out replication architecture
  • Reduce application performance impact to zero by eliminating snapshots
  • Recover from logical or operator errors with journal-based, point-in-time recovery
  • Journal size automatically increases and decreases based on the amount of changed data, ensuring protection even when applications generate large bursts of data
  • Monitor and report with built-in reporting and performance UI
  • Centralized management simplifies tasks, ensuring application availability
  • Remote Office / Branch Office (ROBO) protection extends disaster recovery to branch offices or environments managed by a single vCenter
  • Recovery validation for live failover
  • Increased manageability of alerts and events with a new centralized tab for better cause/effect analysis
  • Expanded REST API capabilities for orchestration, events and alerts
  • Improved environment maintenance using new VRA tabs and workflows

Multi-tier Application Protection & Recovery

Protect applications, not LUNs! With Zerto’s Virtual Protection Groups (VPGs), you can group VMs and virtual disks to be protected and recovered together. This provides complete consistency, regardless of physical location on servers and storage.

  • Parallel recovery accelerates large application availability
  • Group VMs and virtual disks into VPGs
  • Maintain all properties of VPGs, VMs and vApps
  • Test recovery, failover and failback of entire VPGs
  • Support vMotion, Storage vMotion, DRS and HA while replicating
  • Multi-site replication, protection and migration, enabling the protection of data at branch offices
  • Offsite cloning and backup – create a copy of VMs on the replication site for testing, backup or development, with no impact on the production environment
  • Get application protection policy and QoS
  • Leverage built-in support for VSS and VMware vApp objects

Complete BC/DR: Replication, Testing, Recovery, CDP, Migration, Reporting

Zerto eliminates the need to use multiple solutions for your business continuity and disaster recovery needs. All required workflows and functionality are built-in with easy-to-follow, scalable wizards, empowering you to:

  • Document execution of BC/DR processes with recovery reports, for easy auditing and reporting
  • ‘Test before you commit’: verify a specific failover point before committing to it, so you know failover will succeed
  • Fail over one or more VPGs, with automated failback
  • Recover to a historic point in time with journal-based CDP
  • Recover applications instantly, in full read-write mode
  • Test failover, including full remote recovery in a sandboxed zone
  • Migrate workloads to a remote datacenter
  • Monitor with Recovery Time Objective (RTO) reports documenting results of BC/DR tests
  • NEW with ZVR 3.5: enhanced alert handling, including applicable problem and resolution information

NEW! Offsite Backup

Zerto Virtual Replication 3.5 introduces a new paradigm in data protection: Zerto Offsite Backup. This feature increases the usefulness of data already replicated for business continuity and disaster recovery (BC/DR). A duplicate backup infrastructure is no longer required, significantly reducing overhead on the production environment. Zerto Virtual Replication extends the journal, which offers continuous data protection for up to 5 days’ worth of data, to also retain daily, weekly, and monthly copies.

  • Automate retention management, ensuring the following is available at all times per Virtual Protection Group (VPG): daily copies for the last week, weekly copies for the last month, and monthly copies for the last year
  • Configure backup start times and target repository
  • Ensure backup or disaster recovery operations are executed as needed
  • Review backups and retention times to ensure business requirements are met
  • Configure reports to be sent on a daily or weekly basis to provide complete backup information
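
The daily/weekly/monthly retention rule above can be sketched in a few lines. This is a hypothetical helper for illustration, not Zerto’s implementation; the choice of Monday as the weekly copy and the 1st as the monthly copy is an assumption.

```python
# Hypothetical sketch of grandfather-father-son retention: daily copies
# for a week, weekly for a month, monthly for a year.
from datetime import date, timedelta

def retained(backup_dates, today):
    keep = set()
    for d in backup_dates:
        age = (today - d).days
        if age <= 7:
            keep.add(d)                            # daily for the last week
        elif age <= 31 and d.weekday() == 0:
            keep.add(d)                            # weekly (Mondays) for the last month
        elif age <= 365 and d.day == 1:
            keep.add(d)                            # monthly (1st) for the last year
    return keep

today = date(2014, 6, 30)
dates = [today - timedelta(days=i) for i in range(120)]
kept = retained(dates, today)
assert all((today - d).days <= 7 or d.weekday() == 0 or d.day == 1 for d in kept)
```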

vCloud Director Integration

  • vCloud 5.1 seamless integration including native support for vApps, Storage Profiles, Org Networks, Provider VDC and more
  • Easily replicates between vSphere and vCloud environments


Zerto Disaster Recovery for a Virtualized World


Is Your Data Replication Solution Aligned with Virtualization?

Zerto’s award winning hypervisor-based replication software enables alignment of your Business Continuity & Disaster Recovery (BCDR) plan with your IT strategy. By using hypervisor-based data replication, you can reduce DR complexity and hardware costs and still protect your mission-critical virtualized applications.

Disaster Recovery

Virtualization and cloud benefits include increased flexibility, better asset utilization and reduced operational and maintenance costs. However, legacy business continuity solutions for virtualized mission-critical applications impede these benefits as they are complex to manage, do not support advanced virtualization features and require additional hardware and software resources. All the benefits that are expected when the application is virtualized – lower costs, simplified management – are lost. As a result, your IT strategy is not aligned – you need to juggle a virtualized production environment with a physical replication solution.

If you are force-fitting a physical replication solution for DR of virtualized applications, you’re missing out on the key benefits of virtual-aware disaster recovery:

Benefit 1: Reduce Hardware Costs

Zerto is software-only and hardware-agnostic – so you can replicate from any type of storage to any type of storage.

  • Leverage existing, underutilized storage as the replication target.
  • Leverage lower-cost storage at the replication site.

Benefit 2: Reduce Complexity and Streamline IT Operations

Zerto dramatically reduces your day-to-day management complexity and streamlines your operations, eliminating the complexity typically associated with managing BC/DR software solutions. Many of the manual tasks associated with a typical DR solution are automated with Zerto, reducing or eliminating errors.

  • Select a VM in VMware vSphere vCenter and, with a few clicks, it is replicated and protected.
  • Zerto fully automates the entire failover and failback process, including creation of all the VMs, reconfiguring IP addresses and executing custom scripts.
  • Absolutely NO snapshots. Zerto uses continuous replication and does not utilize snapshots which can slow down your production environment.
  • With Zerto’s unique ‘Virtual Protection Group’ concept, you can protect an entire application spanning multiple VMs with a single policy and a single automated failover and failback process.
  • Zerto seamlessly supports VMware vMotion, DRS and HA, plus Storage vMotion and Storage DRS. With Zerto, you are able to fully realize the benefits of VMware and ensure your data protection strategy is maintained exactly as you have specified.
  • Zerto’s software-automated installation does not require any changes to your storage or server infrastructure, making deployment simple and straightforward. The entire install process is completed in minutes.

Benefit 3: Powerful BC/DR for Mission-Critical Applications

Protecting mission-critical applications is not only about copying data. It is about ensuring application recovery, protecting from logical errors and frequently testing failover to ensure you can recover when needed.

  • Protect entire applications by selecting the VMs that are part of the application and let Zerto do the rest.
  • Ensure consistent application protection and recovery with Zerto’s Virtual Protection Group, with or without VSS support.
  • Test failover with the click of a button, while maintaining protection of your production application.
  • Recover from logical errors with ‘point-in-time’ recovery, enabling you to failover to a previous point in time.

Data Replication Software for Virtualized IT Environments from Zerto

Zerto is the first hypervisor-based, virtual-aware, data replication software solution specifically designed for tier-one applications deployed on virtual infrastructure. It replaces traditional array-based replication and business continuity / disaster recovery solutions that are not built to deal with the virtual IT paradigm.


SimpliVity OmniCube Hyperconverged Infrastructure Solution



SimpliVity OmniCube

You’ve grown accustomed to a data center with best-of-breed compute, storage and networking – and all of the resiliency features that maximize uptime and high service levels. However, now that you’ve virtualized a large percentage of workloads and experienced new levels of efficiency, you may want to emulate new cloud architectures. Meet OmniCube: hyperconverged infrastructure that delivers the economies of scale of a cloud computing model while still ensuring enterprise IT performance and resiliency for virtual workloads.

OmniCube stands apart. It is not only a hyperconverged infrastructure platform; its data architecture uniquely addresses the data efficiency and global management requirements of virtualized and cloud computing environments. OmniCube’s single unified stack runs on hyperconverged x86 building blocks, simplifying infrastructure and delivering it at a fraction of the cost and with far less complexity. Deploying a network of two or more OmniCubes creates a global federation that facilitates efficient data movement, resource sharing, and scalability.

PTS and SimpliVity Offer a Free eBook: Hyperconverged Infrastructure for Dummies

From PTS and partner SimpliVity, download a free copy of Hyperconverged Infrastructure for Dummies (eBook in PDF format) by Scott D. Lowe

  • Use hyperconverged infrastructure to simplify IT & reduce total cost of ownership (TCO)
  • Achieve economics & agility of large scale web companies in your data center
  • Solve capacity, performance, data protection, mobility & management challenges

Hyperconvergence is the ultimate in an overall trend of convergence that has hit the market in recent years. Convergence is intended to bring simplicity to increasingly complex data centers. At the highest level, hyperconvergence is a way to enable cloud-like economics and scale without compromising the performance, reliability, and availability you expect in your own data center.

SimpliVity OmniCube Models

SimpliVity offers a range of models within the OmniCube portfolio, each one optimized for differing environments and use cases:

  • CN2000 Series is a compact system designed to run remote office or smaller workloads. It offers 8 to 24 CPU cores and 5 to 10TB of usable storage capacity.
  • CN3000 is a general-purpose modular system designed to run all data center applications. It offers 12 to 24 CPU cores and 20 to 40TB of usable storage capacity.
  • CN5000 is a high-performance system optimized to run high-performance applications. It offers 24 CPU cores and 15 to 30TB of usable storage capacity.


SimpliVity OmniCube Differentiation

What does the landscape of converged infrastructure vendors look like? Good, better and best. The earliest products to market consisted of legacy products delivered as a pre-racked and pre-cabled system based on a reference architecture. They are still separate storage, compute and networking products to manage and support. Building block appliances based on commodity converged hardware improved deployment and acquisition cost. They easily scale by adding modular appliances. What’s missing, however, is the data architecture to address performance, capacity, mobility, protection and management issues.

OmniCube is a single consumable IT system. It’s a unified software infrastructure stack on commodity x86 hardware in a single shared resource pool. OmniCube simplifies management, resolves interoperability issues, and reduces costs when compared with the legacy stack. OmniCube’s innovation is in its Data Virtualization Platform, which delivers performance efficiency (by eliminating redundant reads/writes) and capacity efficiency (by reducing the storage “footprint” of data). OmniCube also achieves VM-centricity by tracking what data belongs to which virtual machine, and enabling VM-mobility.

SimpliVity OmniCube Data Virtualization Platform

OmniCube’s data virtualization platform improves performance, capacity and mobility as it:

  • reduces IOPS requirements by driving efficiency at data’s point of origin and radically reducing read/write IO.
  • reduces capacity requirements by deduplicating, compressing and optimizing data at inception, in real time, once and forever across the lifecycle.
  • tracks what data belongs to which virtual machine, enabling efficient virtual machine mobility.
  • federates management within and across sites, reducing complexity and cost of management.
  • integrates data protection, eliminating the need for traditional backup software and hardware.

OmniCube Features:

  • Global Deduplication
  • Hardware Acceleration
  • Global Federated Management
  • Hyperconvergence
  • Integrated Data Protection

OmniStack Integrated Solution with Cisco UCS

Data center virtualization, standardization, consolidation, and automation: these are no small tasks on your way to a software-defined data center. That’s why SimpliVity has partnered with Cisco to deliver the OmniStack Integrated Solution with Cisco Unified Computing System, a SimpliVity-enabled hyperconverged infrastructure solution.

SimpliVity’s OmniStack technology—including OmniStack software and its revolutionary Accelerator Card—allows you to achieve unprecedented hyperconvergence and enable the software-defined data center you’ve been seeking. Cisco UCS C-Series Rack-Mount servers deliver unified computing in an industry-standard form factor. By leveraging OmniStack running on Cisco UCS, you can achieve 3x TCO savings based on acquisition cost of IT infrastructure, cost of labor, space, and power.

OmniStack with Cisco UCS: Solution Overview

The solution consists of SimpliVity OmniStack software-defined infrastructure that combines eight to twelve core data center functions on Cisco x86 building blocks to deliver hyperconverged infrastructure. Clustering multiple OmniStack-powered hyperconverged infrastructure units forms a shared resource pool and delivers high availability and efficient scaling of performance and capacity. Instances of OmniStack-powered hyperconverged infrastructure within and across data centers form a federation that can be globally managed from a single interface via vCenter.

OmniStack Integrated Solution with Cisco UCS empowers you with the following:

  • You can set up global VM-centric policies, manage workloads in the federation and automate common tasks. This allows you to be more agile and efficient, and reduces your operational costs. OmniStack’s VM-centric approach shifts policies from underlying hardware components to workloads, enabling greater mobility and agility.
  • You gain integrated data protection with VM-level backup policies and WAN-optimized replication to a second site or the cloud for disaster recovery and cost effective, off-site backup.
  • OmniStack data virtualization platform manages data segments in 4K to 8K chunks and deduplicates redundant segments before data hits disk, eliminating unnecessary read/write IO. Deduplicating, compressing and optimizing data in real time saves IOPS and capacity on primary storage and throughout the workload’s lifecycle.
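
The inline deduplication idea described above, hashing fixed-size segments so only unique data hits disk, can be sketched as follows. This is a minimal illustration with an assumed 4 KB chunk size; OmniStack’s actual platform also compresses and optimizes data and uses hardware acceleration.

```python
# Minimal inline-deduplication sketch: each unique 4 KB chunk is stored
# once, keyed by its hash; duplicate chunks never "hit disk" again.
import hashlib

CHUNK = 4096

def write_deduped(data: bytes, store: dict):
    """Store each unique chunk once; return the chunk ids making up the data."""
    refs = []
    for i in range(0, len(data), CHUNK):
        chunk = data[i:i + CHUNK]
        digest = hashlib.sha256(chunk).hexdigest()
        if digest not in store:        # only new segments are written
            store[digest] = chunk
        refs.append(digest)
    return refs

store = {}
refs = write_deduped(b"A" * CHUNK * 3 + b"B" * CHUNK, store)
assert len(refs) == 4 and len(store) == 2   # 4 logical chunks, 2 unique
```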


Other SimpliVity Resources:



SimpliVity Hyperconvergence Platform


Modern IT Infrastructure Challenges

A snapshot of today’s IT environment is one of complexity, cost, and inflexibility that inhibits IT staff from effectively supporting the business. Challenges include:

  • Inability to innovate. An estimated 70% of IT employees’ time goes to just “keeping the lights on”: maintenance, upgrades, patches, etc. Only 30% goes to new innovation or projects that push the business forward.
  • Complexity and decreasing employee productivity. The typical data center faces the complex challenges of assimilating many different IT stacks, including primary storage, servers, backup deduplication appliances, WAN optimization appliances, SSD acceleration arrays, public cloud gateways, backup applications, replication applications, and other special purpose appliances and software applications. IT staff must somehow cobble them together, but it inevitably results in poor utilization, idle resources, and high labor costs. This is typical in data center consolidation.
  • Multiple points of management. Many modern infrastructures require dedicated staff with specialized training to manage the interface of each stand-alone appliance. The lack of staff and challenge of multiple points of management is typical in remote offices.
  • Limited data mobility. As organizations move towards virtualization, they desire virtual machine mobility benefits. VMware vMotion shifts virtual machines from server to server or data center to data center. However, in today’s IT infrastructure, the data associated with the virtual machine is still limited in its mobility. This is evident in data migration scenarios.
  • Inflexible scalability. Predicting infrastructure requirements three years into the future is neither practical nor efficient. Data center managers need a solution that can scale out with growing demand without increasing complexity. Similarly, quickly scaling down infrastructure or rebalancing workloads is time-consuming and difficult. Test and development situations illustrate this issue.
  • Poor IT and business agility. The complexity and inherent inflexibility of legacy infrastructure burden IT teams in day-to-day management, and therefore the business: consider the ability to quickly roll out new applications or build new capabilities the business requires. Legacy infrastructure is also limited in its ability to restore, replicate, and clone data efficiently at scale, both locally and to remote data centers, which constrains desired data management and protection practices. Tier-1 applications illustrate this scenario.
  • Cost. Highly functional and high performance data storage is dependent on an expensive (CAPEX and OPEX) stack of technologies, including storage area network (SAN); Network Attached Storage (NAS); target backup devices; WAN optimization appliances; and traditional standalone servers. An example of this use case is VDI.

The Path to Hyperconvergence

The first step on the path to hyperconvergence, beginning in 2009, was the development of integrated systems and reference architectures. Consortiums formed around existing legacy technologies with the goal of pre-integrated, pre-tested, and pre-validated solutions combining servers, storage, and networking to speed deployment for end users. The idea was good, but the promise did not always match reality. Purchase order-to-deployment takes months, and the integration work between disparate technologies from different vendors can cause delays. Moreover, this first step toward convergence included only compute, storage, and networking; it did not address the other technologies needed to provide full data center services while minimizing redundancy and reducing OPEX and CAPEX. Customers still required replication software, separate disaster recovery appliances, backup software, backup appliances, WAN optimization appliances, cloud gateways, flash caches, and more.

The next step on the path to hyperconvergence was the delivery of “partial convergence.” Vendors entered the market with incremental innovation, creating a shared resource pool from previously disparate compute, network, and storage resources. They effectively created virtual SAN appliances (VSAs) and packaged them inside a server running a hypervisor. Partial convergence decreased deployment times but left something out: it failed to solve the data problem. These solutions did not eliminate redundant data or redundant IO, and they provided neither integrated data protection nor a globally federated, VM-centric architecture with mobility.


Hyperconvergence Defined

SimpliVity OmniCube

According to Taneja Group: “We believe hyperconvergence occurs when you fundamentally architect a new product with the following requirements:

  • A genuine integration of compute, networking, storage, server virtualization, primary storage data deduplication, compression, WAN optimization, storage virtualization, and data protection. No need for separate appliances for disk-based data protection, WAN optimization, or backup software. Full pooling and sharing of all resources. A true data center building block. Just stack the blocks and they reform into a larger pool of complete data center infrastructure.
  • No need for separate acceleration or optimization solutions to be layered on or patched in. Performance features (auto-tiering, caching, and capacity optimizations) are all built in. As such, there is no need for separate flash arrays or flash caching software.
  • Scale-out to web scale, locally and globally, with the system presenting one image. Manageable from one or more locations, delivering radical improvement in deployment and management time due to automation.
  • VM centricity, i.e. full visibility and manageability at the VM level. No LUNs, volumes, or other low-level storage constructs.
  • Policy-based data protection and resource allocation at a VM level.
  • Built-in cloud gateway, allowing the cloud to become a genuine, integrated tier for storage or compute, or both.”

Hyperconvergence is the ultimate in an overall trend of convergence that has hit the market in recent years. Convergence is intended to bring simplicity to increasingly complex data centers. At the highest level, hyperconvergence is a way to enable cloud-like economics and scale without compromising the performance, reliability, and availability you expect in your own data center.

SimpliVity’s OmniCube is the market’s only hyperconvergence platform, providing a combination of a single shared resource pool, data virtualization platform and a globally federated architecture.

PTS and SimpliVity offer a free eBook: Hyperconverged Infrastructure for Dummies

From PTS and partner SimpliVity, download a free copy of Hyperconverged Infrastructure for Dummies (eBook in PDF format) by Scott D. Lowe

  • Use hyperconverged infrastructure to simplify IT & reduce total cost of ownership (TCO)
  • Achieve economics & agility of large scale web companies in your data center
  • Solve capacity, performance, data protection, mobility & management challenges


Quorum onQ Archive Vault for Backup and Disaster Recovery


Quorum onQ Archive Vault gives customers the ability to create long-term archive backups of their critical data. The Archive Vault solution helps organizations meet compliance requirements, increase operational efficiency, mitigate risk, and maintain litigation readiness. It is an extension of our award-winning and patented onQ™ High Availability solution, which provides everything you need for immediate One-Click Recovery™ of all your critical systems after any storage, system, or site failure. The onQ solution maintains up-to-date, ready-to-run virtual machine clones of your critical systems that can run right on the appliance, transparently taking over for failed servers within minutes.

Quorum onQ Archive Vault for Backup & Disaster Recovery (BDR)

Comprehensive Solution

As an add-on to onQ, customers benefit from onQ’s High Availability and Disaster Recovery capabilities as well as long term backup and retention.


A single instance of the Quorum onQ Archive Vault can be extended to more than 200 TB across eight disk storage modules. Multiple Archive Vault instances can be deployed, allowing virtually unlimited growth.

Ease of Use

The Quorum onQ Archive Vault appliance seamlessly pulls backups from onQ. Archive Vault can be configured locally and/or remotely to create an off-site repository for reliable disaster recovery.

Major Features

  • Virtually unlimited growth potential
  • Leverages High Availability (HA) & Disaster Recovery (DR) compute resources, lowering cost & U space
  • Multiple HA & DR tenants/appliances can access a single Archive Vault
  • Retains customer data de-duplicated in the archive repository
  • Selectable level of redundancy: HA only, DR only, or HA & DR
  • Archive repository data encryption option
Hardware Features

  • 2U 12-drive add-on storage module to the onQ HA and/or DR appliance
  • Supports hardware alerting
  • Supports “spin-down” of unused disk drives to conserve energy
  • Supports hot-swap disk drives and RAID-6 configuration for extra reliability
  • Utilizes a NV-CACHE battery-backed disk controller to protect data during unexpected power outages
  • Utilizes redundant power supplies
Scheduling Features

  • Allows scheduling of archive backups on monthly, quarterly, and yearly schedules, with appropriate expiration
  • Yearly retention is unlimited
  • Daily and weekly backups will reside in the existing onQ repository
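The expiration side of such a schedule can be sketched as follows (a toy model: the retention periods and the `expired` helper are hypothetical, with yearly retention left unlimited as the feature list states):

```python
from datetime import date, timedelta

# Hypothetical retention periods per archive schedule; in the real product
# these come from the configured policy. None means "keep forever".
RETENTION = {
    "monthly": timedelta(days=365),        # keep monthly archives one year
    "quarterly": timedelta(days=3 * 365),  # keep quarterly archives three years
    "yearly": None,                        # yearly retention is unlimited
}

def expired(kind: str, taken: date, today: date) -> bool:
    """Return True if an archive of the given kind is past its retention."""
    keep_for = RETENTION[kind]
    if keep_for is None:  # unlimited retention, never expires
        return False
    return today - taken > keep_for
```

A purge pass would then simply delete every archive for which `expired(...)` returns True.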


Quorum onQ Appliance for Backup and Disaster Recovery

btn_contact btn_video

The Quorum® onQ™ appliance was engineered from the ground up to provide an enterprise-level disaster recovery solution without the complexity or cost of traditional approaches. onQ™ One-Click Recovery™ puts all of your applications and data back online within minutes of a storage, system, or entire-site failure. Deploy onQ™ to meet your needs: locally for High Availability (HA), remotely for Disaster Recovery (DR), or both. onQ™ appliances are offered in a range of sizes to match your requirements, and also as a service via a hybrid cloud solution.

Quorum onQ Appliance for Backup & Disaster Recovery (BDR)
Easy Continuity & Comfort


  • One-Click Recovery™ system
  • One-Click Testing
  • Replicates anywhere—compressed & encrypted
  • Protection for storage, system & site failures
  • Integrated server monitoring
  • Email and text alerts
  • Scheduled email health reports

Data Protection & Recovery

  • Full local backup of entire system
  • Supports physical and virtual servers
  • Sub file-level incremental updates
  • Global deduplication at source
  • Powerful exclusion rules
  • Restore to any snapshot level
  • File-level recovery to any snapshot level
  • Windows Share Restore

Flexible Deployment

  • Offsite or in the cloud
  • Bandwidth throttling
  • Internal and External storage
  • No GB or monthly storage charges
  • Supports all apps without add-ons
Ultra-efficient Incremental Updates—RPO as low as 15 minutes
  • Incremental Forever: After the initial full image is made at installation, onQ™ only transfers one copy of any changes since the last interval.
  • Just the Changes: We detect not only which files have changed, but which parts of the files have changed so when that multi-gigabyte database has one record change, we don’t copy the entire file.
  • Deduplication: We only send the parts of files that don’t already exist on any other server we’re protecting. That means that if the same “chunk” exists on any server protected by the appliance, it doesn’t need to be sent.
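Taken together, the incremental-forever and deduplication steps above can be sketched like this (a simplified model with fixed-size chunks; the function names and the in-memory `appliance_seen` set are invented for illustration, not Quorum's implementation):

```python
import hashlib

CHUNK = 4096  # fixed-size chunks for simplicity

def chunk_hashes(data: bytes) -> list:
    """Hash each fixed-size chunk of the data."""
    return [hashlib.sha256(data[i:i + CHUNK]).hexdigest()
            for i in range(0, len(data), CHUNK)]

def incremental_update(new_data: bytes, last_hashes: list, appliance_seen: set):
    """Send only chunks that changed since the last interval AND are not
    already stored on the appliance for any protected server (global dedup)."""
    hashes = chunk_hashes(new_data)
    to_send = []
    for i, digest in enumerate(hashes):
        unchanged = i < len(last_hashes) and last_hashes[i] == digest
        if not unchanged and digest not in appliance_seen:
            to_send.append((i, new_data[i * CHUNK:(i + 1) * CHUNK]))
            appliance_seen.add(digest)
    return hashes, to_send
```

A chunk is transferred only if it changed since the last interval and no identical chunk already exists on the appliance from any protected server.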
Windows Share Restore
  • Allows browsing and retrieving files directly from the onQ or Archive Vault repository by mapping a share onto a Windows machine.
One-Click Recovery™
  • Current Forever: Each ultra-efficient update is merged into the virtual machine recovery nodes, which are full current images.
  • Ready-to-Run: We don’t wait until you need them to build your virtual recovery nodes, allowing one-click recovery at any time.
  • Point-in-Time Recovery: Even though we merge the changes into the ready-to-run recovery node, you can restore files or an entire system to a prior state.
Server Monitoring
  • We continuously check the health of your servers, sending an alert if any service interruption is detected.
Site-to-Site Replication
  • We compress and encrypt changes and synchronize them with a remote appliance. You get instant remote recovery for complete disaster protection.
Redundant storage
  • Our appliances provide hot-swap protection from any single-disk failure, and with your remote onQ, you’ll have additional protection.
Network Storage Support
  • You can use standard IP network storage to extend or replace our integrated storage.

Quorum onQ™ Specifications

Quorum onQ™ appliances are sized based on how many recovery nodes you need and how much data you need to protect. Storage is always expandable, so you’re not locked into a rigid platform. With incremental-forever and deduplication technologies, the amount of new data sent to the appliance per snapshot is usually quite small, minimizing not only storage requirements but also the load on your LAN and WAN.

  1. Select one appliance to be placed on your LAN for High Availability
  2. Select a second appliance to be synchronized to your own remote site


Quorum Hybrid Cloud Disaster Recovery Solution

btn_contact btn_video

Your critical applications are ready to run (or test) at a moment’s notice, while safe and secure in world-class SAS-70 certified data centers. The Hybrid Cloud Solution is an extension of our award-winning and patented onQ™ High Availability (HA) appliance solution, which provides everything a small- or medium-sized company needs for immediate One-Click Recovery™ of all its critical systems after any storage, system, or site failure. The onQ solution maintains up-to-date, ready-to-run virtual machine clones of your critical systems that can run right on the appliance, or in the cloud, transparently taking over for failed servers within minutes.

Unlike a typical backup-only cloud storage solution, the hybrid cloud solution can run servers locally or in the cloud. onQ recovery time is measured in minutes, not days: there is no need to ship new hardware to restore applications. With the click of a button, your business can be up and running within minutes of a disaster.

Secure Cloud Solution

All data transferred from the High Availability (HA) appliance to the Disaster Recovery (DR) site passes through a 128-bit AES-encrypted session inside a 256-bit AES VPN tunnel connected directly from the HA appliance to the DR cloud solution. A dedicated virtual firewall isolates each individual virtual network, and all connections to the DR appliance or the DR recovery nodes are made via VPN to that firewall.

Automatic Testing

Unlike other cloud data backup solutions, onQ maintains a current virtual copy of every server, always on and ready to run. Additionally, onQ automatically performs daily tests on the protected servers. Because of cost and complexity, such tests are rarely, if ever, performed with traditional cloud backup solutions.

The data centers that host your applications are world-class, state of the art, and SAS-70 Type II certified, built with close attention to providing the highest-performance, safest, and most stable environment possible.

Physical Cloud Security Features

  • Closed Circuit Television System: 46 indoor cameras and 16 outdoor cameras
  • Sub-floor fire detection protection
  • Biometric and keypad access control
  • Redundant power feeds
  • 24/7 guard service
Performance Features

  • Network SLAs – less than 45 milliseconds latency, less than 0.3% packet loss, and less than 0.5 millisecond jitter
  • Redundant P-NAP that intelligently routes traffic across eleven major internet backbones
  • Latest data center performance design techniques including in-row cooling options
  • Redundant high-speed connections to major internet backbones
  • N+1 AC power redundancy
Other Features

  • 24/7 Support staff proactively monitoring facility performance
  • Green design: data centers consume less fossil fuel and produce fewer emissions
  • Advanced routing protocol

