PICMG – October 2, 2024

Collected at: https://www.iotforall.com/open-modular-server-architectures-drive-intelligence-to-the-edge

Industrial IoT has a Goldilocks problem. There are plenty of off-the-shelf server solutions for small IoT deployments. Meanwhile, large companies can afford custom server designs. But there hasn’t been a solution that’s just right for low-to-mid-volume server deployments—the type most often found in industrial edge IoT. Enter open modular server architectures.

Let’s take a look at how this gap in the market appeared. It comes down to a few factors:

  • Rugged edge servers in IIoT have widely varying workloads, from simple storage to intense AI processing. You can’t manage those workloads with a single design, and proprietary servers rarely support heterogeneous architectures.
  • Edge servers have to handle heat very efficiently, while also remaining sealed against particles, liquids, and whatever else the factory floor can throw at them. At the same time, server-class processors draw considerably more power (and generate commensurate heat) than traditional embedded or “edge” processors. The design challenge of threading this temperature needle limits hardware options.
  • To bring AI to the edge, we need high-performance computing modules, including support for GPU architectures. That requirement further limits the options for low-to-mid-volume server deployments.

Of course, we wouldn’t bring up this challenge if we didn’t have a solution to suggest. Here’s the good news: open server-on-module specifications like COM-HPC standardize the design of rugged servers, supplying a feature set that’s ideal for edge computing.

Here’s how COM-HPC and the latest generation of open standards pave the way for industrial IoT at the edge.

How Rugged Edge Servers Benefit From Open Standards

We have previously described a path forward for interoperability and interchangeability in process control systems (PCSs). But what about the rugged servers behind the PCS? 

These should also be interoperable and interchangeable, part of a broader ecosystem of mutually compatible IIoT components. In other words, rugged servers should be designed according to open hardware specifications. That’s the only way to achieve a modular design that supports upgradability, cost-efficiency, and technological innovation. 

“The open standardization model says, ‘Let’s all do the same thing with at least the pieces that aren’t competitive,’” said David DeBari, control systems engineer at ExxonMobil.

“Why do we all have the same electric wall plugs? It’s because the world said ‘This is how we want to do it for today’ so there could be a lot of innovation around electronic devices. Standardization is a positive force.” 

To take DeBari’s example a step further, standardized wall outlets allow product developers to focus on new features and capabilities. They don’t have to waste time figuring out how their devices connect to the electrical grid. 

Something similar can happen for rugged edge servers and other IIoT devices. It should! However, that open market requires a common hardware specification and buy-in from designers and device manufacturers. That buy-in is emerging for the COM-HPC standard, with many developers already incorporating it into product designs. 

But why? What makes COM-HPC—and its smallest form factor, COM-HPC Mini—a strong specification for rugged edge servers specifically? We’ll cover that next. 

Defining the Ideal Specification for IIoT Edge Servers

The best way to understand the COM-HPC standard is to unpack its name: it’s a Computer-On-Module (COM) specification for high-performance computing (HPC). This standard achieves unprecedented modularity by introducing a double-board architecture. 

The compute module is standardized for high performance computing. However, the carrier board is customizable, ready to support the needs of a specific edge server. (The specification also defines a module connector for high-speed communication between the two boards.) 

Developers can configure the carrier board to fit virtually any need. It supports architectures including the following (a brief runtime-dispatch sketch follows the list): 

  • CPU (ARM)
  • CPU (x86)
  • CPU (RISC-V)
  • GPU
  • FPGA

That’s the interoperable part of the equation. For interchangeability—hardware compatibility—COM-HPC supports a wide range of connector protocols, including the following (an enumeration sketch follows the list): 

  • USB4/Thunderbolt
  • 25 Gigabit Ethernet
  • PCIe® 5.0
  • PCIe® 6.0
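In practice, interchangeability shows up as peripherals the host can simply enumerate. As a rough illustration (not part of the specification), here’s a sketch that walks Linux’s standard sysfs PCI tree to list whatever the carrier board exposes over PCIe; it assumes a Linux host.

```python
# Minimal sketch: enumerating PCIe devices from Linux sysfs to see
# which carrier-board peripherals the host actually exposes.
# Assumes a Linux host; paths follow the standard sysfs PCI layout.
from pathlib import Path

PCI_ROOT = Path("/sys/bus/pci/devices")

def list_pci_devices():
    """Yield (address, vendor ID, device ID, class code) for each
    PCI/PCIe device visible to the kernel."""
    for dev in sorted(PCI_ROOT.iterdir()):
        vendor = (dev / "vendor").read_text().strip()
        device = (dev / "device").read_text().strip()
        pci_class = (dev / "class").read_text().strip()
        yield dev.name, vendor, device, pci_class

if __name__ == "__main__":
    for addr, vendor, device, pci_class in list_pci_devices():
        print(f"{addr}  vendor={vendor}  device={device}  class={pci_class}")
```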

The open nature of the COM-HPC standard extends to compatibility with other leading specifications. For example, COM-HPC’s PCIe compatibility leads to support for CXL 3.1, creating the possibility of interoperable memory deployments.

Additionally, DMTF’s Redfish interoperability standard greatly expands the capabilities of COM-HPC’s management platform specification, COM-HPC Platform Management Interface (PMI). Thanks to Redfish integration, the COM-HPC PMI makes it easy to maintain, monitor, and repair systems built on COM-HPC.
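As a rough illustration of what that looks like in practice, here’s a minimal sketch that polls a Redfish service for chassis temperatures. The BMC address and credentials are hypothetical placeholders; the resource paths follow DMTF’s published Redfish resource model (Chassis → Thermal), not anything COM-HPC-specific.

```python
# Minimal sketch: polling a Redfish service for module temperatures.
# The host address and credentials are hypothetical placeholders;
# resource paths follow DMTF's Redfish resource model.
import requests

BASE = "https://10.0.0.42"      # hypothetical BMC address
AUTH = ("admin", "password")    # placeholder credentials

def chassis_temperatures(session: requests.Session):
    """Yield (sensor name, reading in Celsius) for every thermal
    sensor reported under the chassis collection."""
    chassis = session.get(f"{BASE}/redfish/v1/Chassis", verify=False).json()
    for member in chassis.get("Members", []):
        thermal_url = f"{BASE}{member['@odata.id']}/Thermal"
        thermal = session.get(thermal_url, verify=False).json()
        for sensor in thermal.get("Temperatures", []):
            yield sensor.get("Name"), sensor.get("ReadingCelsius")

if __name__ == "__main__":
    with requests.Session() as s:
        s.auth = AUTH
        for name, celsius in chassis_temperatures(s):
            print(f"{name}: {celsius} °C")
```

Because Redfish is a uniform REST interface, the same script works against any compliant COM-HPC system, which is exactly the maintain-monitor-repair story the PMI is after.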

But for all these advantages, there’s still the challenge of ruggedness in any industrial edge device. 

The COM-HPC standard specifies three types of modules: Server, Client, and Mini. They all support rugged design, but the Mini form factor—which contains just one 400-pin connector—is particularly suited to the challenges of rugged mobile applications. It has soldered memory and extremely efficient thermal design, and it’s small enough (stack height of 15mm with thermal relief) to keep server footprints very compact. 

For all its strengths, however, the COM-HPC specification is most helpful when it works in tandem with other open standards from organizations like DMTF. 

From the device to the PCS to the rugged edge servers, IIoT components are most helpful when they’re upgradable, low-cost, and quick to communicate. All three benefits require interoperability and interchangeability among components—and that will take a whole ecosystem of open specifications. In other words, COM-HPC is just the beginning.
