

Who We Are
USES: To deploy as a means of accomplishing a purpose or achieving a result
USES Integrated Solutions Inc. specializes in developing uniquely scalable compute solutions focused on ARM SoCs, GPUs, and alternative or emerging processor technologies. Through in-house mechanical, electrical, and software design, and strategic partnerships with industry leaders, USES develops end-to-end computing solutions, ranging from ground-up custom designs to semi-custom integrations for unique workloads and environments.
Custom/Semi-Custom Hardware Design
Custom Edge Management & Software Solutions
Mechanical Design
Project Consultation
Inference Server Platform
Orin NX Inference Server
- Scales from 8 - 24x 100 TOPS, 1024-core NVIDIA GPU and 32 Tensor Cores
- Integrated Layer 2 Switching / Layer 3 Routing with 4x 10G SFP+ uplinks, TSN, 1588v2 and SyncE support
- 8 - 24x 2TB NVMe Storage (1 per Jetson module)
- Redundant Power Supply
- Integrated Out-of-Band Management with Web GUI, Individual Console Access and Power Control
- Operating Temperature: 0°C to +50°C (+32°F to +122°F)

AGX Orin Inference Server
- Scales from 4 - 12x 275 TOPS, 2048-core NVIDIA® Ampere™ GPU and 64 Tensor Cores
- Integrated Layer 2 Switching / Layer 3 Routing with 4x 10G SFP+ uplinks, TSN, 1588v2 and SyncE support
- 4 - 12x 2TB NVMe Storage (1 per Jetson module)
- Redundant Power Supply
- Integrated Out-of-Band Management with Web GUI, Individual Console Access and Power Control
- Operating Temperature: 0°C to +50°C (+32°F to +122°F)

Building on the cloud-native support and modern AI stack of the NVIDIA Jetson family, the Inference Servers form a resilient, manageable cluster using container orchestration technologies such as Kubernetes. Whether you are building edge micro-clusters or CI pipelines, the Inference Servers provide the foundation to automate your Edge or Jetson strategy at scale.