<?xml version="1.0" encoding="utf-8"?>
<rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:foaf="http://xmlns.com/foaf/0.1/" xmlns:og="http://ogp.me/ns#" xmlns:rdfs="http://www.w3.org/2000/01/rdf-schema#" xmlns:schema="http://schema.org/" xmlns:sioc="http://rdfs.org/sioc/ns#" xmlns:sioct="http://rdfs.org/sioc/types#" xmlns:skos="http://www.w3.org/2004/02/skos/core#" xmlns:xsd="http://www.w3.org/2001/XMLSchema#" version="2.0" xml:base="https://www.linuxjournal.com/">
  <channel>
    <title>HPC</title>
    <link>https://www.linuxjournal.com/</link>
    <description/>
    <language>en</language>
    
    <item>
  <title>Data in a Flash, Part II: Using NVMe Drives and Creating an NVMe over Fabrics Network</title>
  <link>https://www.linuxjournal.com/content/data-flash-part-ii-using-nvme-drives-and-creating-nvme-over-fabrics-network</link>
  <description>  &lt;div data-history-node-id="1340246" class="layout layout--onecol"&gt;
    &lt;div class="layout__region layout__region--content"&gt;
      
            &lt;div class="field field--name-node-author field--type-ds field--label-hidden field--item"&gt;by &lt;a title="View user profile." href="https://www.linuxjournal.com/users/petros-koutoupis" lang="" about="https://www.linuxjournal.com/users/petros-koutoupis" typeof="schema:Person" property="schema:name" datatype="" xml:lang=""&gt;Petros Koutoupis&lt;/a&gt;&lt;/div&gt;
      
            &lt;div class="field field--name-body field--type-text-with-summary field--label-hidden field--item"&gt;&lt;p&gt;&lt;em&gt;By design, NVMe drives are intended to provide local access to the
machines they are plugged in to; however, the NVMe over Fabrics
specification seeks to address this very limitation by enabling remote
network access to that same device.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;
This article puts into practice what you learned in &lt;a href="//www.linuxjournal.com/content/data-flash-part-i-evolution-disk-storage-and-introduction-nvme"&gt;Part I&lt;/a&gt; and shows
how to use NVMe drives in a Linux environment. But, before continuing,
you first need to make sure that your physical (or virtual)
machine is up to date. Once you verify that to be the case,
make sure you're able to see all connected NVMe devices:

&lt;/p&gt;&lt;pre&gt;
&lt;code&gt;
$ cat /proc/partitions |grep -e nvme -e major
major minor  #blocks  name
 259        0 3907018584 nvme2n1
 259        1 3907018584 nvme3n1
 259        2 3907018584 nvme0n1
 259        3 3907018584 nvme1n1
&lt;/code&gt;
&lt;/pre&gt;


&lt;p&gt;
Those devices also will appear in &lt;code&gt;sysfs&lt;/code&gt;:

&lt;/p&gt;&lt;pre&gt;
&lt;code&gt;
$ ls /sys/block/|grep nvme
nvme0n1
nvme1n1
nvme2n1
nvme3n1
&lt;/code&gt;
&lt;/pre&gt;


&lt;p&gt;
If you don't see any connected NVMe devices, make sure the kernel
module is loaded:

&lt;/p&gt;&lt;pre&gt;
&lt;code&gt;
petros@ubu-nvme1:~$ lsmod|grep nvme
nvme                   32768  0
nvme_core              61440  1 nvme
&lt;/code&gt;
&lt;/pre&gt;
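&lt;p&gt;
If the module doesn't appear in the &lt;code&gt;lsmod&lt;/code&gt; output, you can
load it manually (on most modern distribution kernels, the driver is
built in or loads automatically, so this step usually is unnecessary):
&lt;/p&gt;&lt;pre&gt;
&lt;code&gt;
petros@ubu-nvme1:~$ sudo modprobe nvme
&lt;/code&gt;
&lt;/pre&gt;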


&lt;p&gt;
Next, install the drive management utility called
&lt;code&gt;nvme-cli&lt;/code&gt;. This utility is developed and maintained by the
same NVM Express committee that defined the NVMe specification. The nvme-cli
source code is hosted on
&lt;a href="https://github.com/linux-nvme/nvme-cli"&gt;GitHub&lt;/a&gt;. Fortunately,
some operating
systems offer this package in their internal repositories.
Installing it on the latest Ubuntu looks something like this:

&lt;/p&gt;&lt;pre&gt;
&lt;code&gt;
petros@ubu-nvme1:~$ sudo add-apt-repository universe
petros@ubu-nvme1:~$ sudo apt update &amp;&amp; sudo apt install nvme-cli
&lt;/code&gt;
&lt;/pre&gt;
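&lt;p&gt;
With the package installed, a quick sanity check confirms the utility is
available on your path (the version string reported will vary by
distribution):
&lt;/p&gt;&lt;pre&gt;
&lt;code&gt;
petros@ubu-nvme1:~$ nvme version
&lt;/code&gt;
&lt;/pre&gt;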


&lt;p&gt;
Using this utility, you're able to list more details of all connected
NVMe drives (note: the tabular output below has been reformatted and
truncated to better fit here):

&lt;/p&gt;&lt;/div&gt;
      
            &lt;div class="field field--name-node-link field--type-ds field--label-hidden field--item"&gt;  &lt;a href="https://www.linuxjournal.com/content/data-flash-part-ii-using-nvme-drives-and-creating-nvme-over-fabrics-network" hreflang="en"&gt;Go to Full Article&lt;/a&gt;
&lt;/div&gt;
      
    &lt;/div&gt;
  &lt;/div&gt;

</description>
  <pubDate>Mon, 20 May 2019 11:00:00 +0000</pubDate>
    <dc:creator>Petros Koutoupis</dc:creator>
    <guid isPermaLink="false">1340246 at https://www.linuxjournal.com</guid>
    </item>
<item>
  <title>Data in a Flash, Part I: the Evolution of Disk Storage and an Introduction to NVMe</title>
  <link>https://www.linuxjournal.com/content/data-flash-part-i-evolution-disk-storage-and-introduction-nvme</link>
  <description>  &lt;div data-history-node-id="1340244" class="layout layout--onecol"&gt;
    &lt;div class="layout__region layout__region--content"&gt;
      
            &lt;div class="field field--name-node-author field--type-ds field--label-hidden field--item"&gt;by &lt;a title="View user profile." href="https://www.linuxjournal.com/users/petros-koutoupis" lang="" about="https://www.linuxjournal.com/users/petros-koutoupis" typeof="schema:Person" property="schema:name" datatype="" xml:lang=""&gt;Petros Koutoupis&lt;/a&gt;&lt;/div&gt;
      
            &lt;div class="field field--name-body field--type-text-with-summary field--label-hidden field--item"&gt;&lt;p&gt;&lt;em&gt;NVMe drives have paved the way for computing at stellar speeds, but
the technology didn't suddenly appear overnight. It took
an evolutionary process before we could rely on the very performant
SSD for our primary storage tier.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;
Solid State Drives (SSDs) have taken the computer industry
by storm in recent years. The technology is impressive with its high-speed capabilities. It
promises low-latency access to sometimes critical data while
increasing overall performance, at least when compared to what is now
becoming the legacy Hard Disk Drive (HDD). With each passing year, SSD
market shares continue to climb, replacing the HDD in many sectors.
The effects of this are seen in personal, mobile and server
computing.
&lt;/p&gt;

&lt;p&gt;
IBM first unleashed the HDD into the computing world in 1956. By
the 1960s, the HDD became the dominant secondary storage device
for general-purpose computers (&lt;em&gt;emphasis on secondary storage
device&lt;/em&gt;, memory being the first). Capacity and performance were the primary characteristics
defining the HDD. In many
ways, those characteristics continue to define the technology—although
not in the most positive ways (more details on that shortly).
&lt;/p&gt;

&lt;p&gt;
The first IBM-manufactured hard drive, the 350 RAMAC, was as large as two
medium-sized refrigerators with a total capacity of 3.75MB on
a stack of 50 disks. Modern HDD technology has produced disk drives with
capacities as high as 16TB, specifically with the more recent
Shingled Magnetic Recording (SMR) technology coupled with helium—yes,
that's the same chemical element abbreviated as &lt;em&gt;He&lt;/em&gt; in the
periodic table. Because sealed helium is less dense than air, it creates
less drag and turbulence, which increases the drive's potential speed
and allows more platters to be stacked in the same space used
by conventional 2.5" and 3.5" disk drives.
&lt;/p&gt;

&lt;img src="https://www.linuxjournal.com/sites/default/files/styles/max_650x650/public/u%5Buid%5D/12598f1.jpg" width="640" height="480" alt="""" class="image-max_650x650" /&gt;&lt;p&gt;
&lt;em&gt;Figure 1. A lineup of Standard HDDs throughout Their History
and across All Form Factors
(by Paul R. Potts—Provided by Author, CC BY-SA 3.0 us,
&lt;a href="https://commons.wikimedia.org/w/index.php?curid=4676174"&gt;https://commons.wikimedia.org/w/index.php?curid=4676174&lt;/a&gt;)&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;
A disk drive's performance typically is calculated by the time
required to move the drive's heads to a specific track or cylinder
and the time it takes for the requested sector to move under the
head—that is, the latency. Performance is also measured by the rate at
which data is transmitted.
&lt;/p&gt;

&lt;p&gt;
Being a mechanical device, an HDD does not perform nearly as fast as
memory. A lot of moving components add to latency times
and decrease the overall speed by which you can access data (for both read
and write operations).
&lt;/p&gt;&lt;/div&gt;
      
            &lt;div class="field field--name-node-link field--type-ds field--label-hidden field--item"&gt;  &lt;a href="https://www.linuxjournal.com/content/data-flash-part-i-evolution-disk-storage-and-introduction-nvme" hreflang="en"&gt;Go to Full Article&lt;/a&gt;
&lt;/div&gt;
      
    &lt;/div&gt;
  &lt;/div&gt;

</description>
  <pubDate>Mon, 29 Apr 2019 11:30:00 +0000</pubDate>
    <dc:creator>Petros Koutoupis</dc:creator>
    <guid isPermaLink="false">1340244 at https://www.linuxjournal.com</guid>
    </item>
<item>
  <title>The High-Performance Computing Issue</title>
  <link>https://www.linuxjournal.com/content/high-performance-computing-issue</link>
  <description>  &lt;div data-history-node-id="1340267" class="layout layout--onecol"&gt;
    &lt;div class="layout__region layout__region--content"&gt;
      
            &lt;div class="field field--name-node-author field--type-ds field--label-hidden field--item"&gt;by &lt;a title="View user profile." href="https://www.linuxjournal.com/users/bryan-lunduke" lang="" about="https://www.linuxjournal.com/users/bryan-lunduke" typeof="schema:Person" property="schema:name" datatype="" xml:lang=""&gt;Bryan Lunduke&lt;/a&gt;&lt;/div&gt;
      
            &lt;div class="field field--name-body field--type-text-with-summary field--label-hidden field--item"&gt;&lt;p&gt;&lt;em&gt;Since the dawn of computing, hardware engineers have had one goal that's stood out above all
the rest: speed.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;
Sure, computers have many other important qualities (size, power consumption, price and so on), but
nothing captures our attention like the never-ending quest for faster hardware (and software to power
it). Faster drives. Faster RAM. Faster processors. Speed, speed and more speed. [Insert manly
grunting sounds here.]
&lt;/p&gt;

&lt;p&gt;
What's the first thing that happens when a new CPU is released? Benchmarks to compare it against the
last batch of processors.
&lt;/p&gt;

&lt;p&gt;
What happens when a graphics card is unveiled? Reviewers quickly load up whatever the most
graphically demanding video game is and see just how it stacks up to the competition in frame-rate and
resolution. Power and speed capture the attention of everyone from software engineers to
gamers.
&lt;/p&gt;

&lt;p&gt;
Nowhere is this never-ending quest for speed more apparent than in the high-performance computing
(HPC) space. Built to handle some of the most computationally demanding work ever conceived by man,
these supercomputers are growing faster by the day—and Linux is right there, powering just about
all of them.
&lt;/p&gt;

&lt;p&gt;
In this issue of &lt;em&gt;Linux Journal&lt;/em&gt;, we take a stroll through the history of supercomputers, from their
beginnings (long before Linux was a gleam in Linus Torvalds' eye...heck, long before Linus Torvalds
was a gleam in his parents' eyes) all the way to the present day, where Linux absolutely dominates the
supercomputer and HPC world.
&lt;/p&gt;

&lt;p&gt;
Then we take a deep dive into one of the most critical components of computing (affecting both desktop
and supercomputers alike): storage.
&lt;/p&gt;

&lt;p&gt;
Petros Koutoupis, Senior Platform Architect on IBM's Cloud Object Storage, creator of RapidDisk
(Linux kernel modules for RAM drives and caching) and &lt;em&gt;LJ&lt;/em&gt; Editor at Large, gives an overview of the history of computer
storage leading up to the current, ultra-fast SSD and NVMe drives.
&lt;/p&gt;

&lt;p&gt;
Once you're up to speed (see what I did there?) on NVMe storage, Petros then gives a
detailed—step-by-step—walk-through of how to best utilize NVMe drives with Linux, including how to set up your
system to have remote access to NVMe resources over a network, which is just plain cool.
&lt;/p&gt;

&lt;p&gt;
Taking a break from talking about the fastest computers the Universe has ever known, let's turn our
attention to a task that almost every single one of us tackles at least occasionally.
&lt;/p&gt;

&lt;p&gt;
Photography.
&lt;/p&gt;

&lt;p&gt;
Professional photographer Carlos Echenique provides an answer to the age-old question: is it
possible for a professional photographer to use a FOSS-based workflow? (Spoiler: the answer is yes.)
&lt;/p&gt;&lt;/div&gt;
      
            &lt;div class="field field--name-node-link field--type-ds field--label-hidden field--item"&gt;  &lt;a href="https://www.linuxjournal.com/content/high-performance-computing-issue" hreflang="en"&gt;Go to Full Article&lt;/a&gt;
&lt;/div&gt;
      
    &lt;/div&gt;
  &lt;/div&gt;

</description>
  <pubDate>Fri, 30 Nov 2018 15:43:31 +0000</pubDate>
    <dc:creator>Bryan Lunduke</dc:creator>
    <guid isPermaLink="false">1340267 at https://www.linuxjournal.com</guid>
    </item>
<item>
  <title>Linux and Supercomputers</title>
  <link>https://www.linuxjournal.com/content/linux-and-supercomputers</link>
  <description>  &lt;div data-history-node-id="1340269" class="layout layout--onecol"&gt;
    &lt;div class="layout__region layout__region--content"&gt;
      
            &lt;div class="field field--name-node-author field--type-ds field--label-hidden field--item"&gt;by &lt;a title="View user profile." href="https://www.linuxjournal.com/users/bryan-lunduke" lang="" about="https://www.linuxjournal.com/users/bryan-lunduke" typeof="schema:Person" property="schema:name" datatype="" xml:lang=""&gt;Bryan Lunduke&lt;/a&gt;&lt;/div&gt;
      
            &lt;div class="field field--name-body field--type-text-with-summary field--label-hidden field--item"&gt;&lt;p&gt;&lt;em&gt;As we sit here, in the year Two Thousand and Eighteen (better known as "the future,
where the robots live"), our beloved Linux is the undisputed king of supercomputing.
Of the top 500 supercomputers in the world, approximately zero of them don't run Linux
(give or take...zero).&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;
The most complicated, powerful computers in the world—performing the most intense
processing tasks ever devised by man—all rely on Linux. This is an amazing feat
for the little Free Software Kernel That Could, and one heck of a great bragging point
for Linux enthusiasts and developers across the globe.
&lt;/p&gt;

&lt;p&gt;
But it wasn't always this way.
&lt;/p&gt;

&lt;p&gt;
In fact, Linux wasn't even a blip on the supercomputing radar until the late 1990s.
And, it took another decade for Linux to gain the dominant position in the fabled "Top
500" list of most powerful computers on the planet.
&lt;/p&gt;

&lt;span class="h3-replacement"&gt;
A Long, Strange Road&lt;/span&gt;


&lt;p&gt;
To understand how we got to this mind-blowingly amazing place in computing history, we
need to go back to the beginning of "big, powerful computers"—or at least, much
closer to it: the early 1950s.
&lt;/p&gt;

&lt;p&gt;
Tony Bennett and Perry Como ruled the airwaves, &lt;em&gt;The Day The Earth Stood
Still&lt;/em&gt; was
in theaters, &lt;em&gt;I Love Lucy&lt;/em&gt; made its television debut, and holy moly, does that feel
like a long time ago.
&lt;/p&gt;

&lt;p&gt;
In this time, which we've established was a long, long time ago, a gentleman named
Seymour Cray—whom I assume commuted to work on his penny-farthing and rather
enjoyed a rousing game of hoop and stick—designed a machine for the Armed Forces
Security Agency, which, only a few years before (in 1949), was created to handle
cryptographic and electronic intelligence activities for the United States military.
This new agency needed a more powerful machine, and Cray was just the man (hoop and
stick or not) to build it.
&lt;/p&gt;

&lt;img src="https://www.linuxjournal.com/sites/default/files/styles/max_650x650/public/u%5Buid%5D/12609f1.jpg" width="650" height="341" alt="""" class="image-max_650x650" /&gt;&lt;p&gt;&lt;em&gt;Figure 1. Seymour Cray, Father of the Supercomputer (from &lt;a href="http://www.startribune.com/minnesota-history-seymour-cray-s-mind-worked-at-super-computer-speed/289683511/"&gt;http://www.startribune.com/minnesota-history-seymour-cray-s-mind-worked-at-super-computer-speed/289683511&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;
This resulted in a machine known as the Atlas II.
&lt;/p&gt;

&lt;p&gt;
Weighing a svelte 19 tons, the Atlas II was a groundbreaking powerhouse—one of the
first computers to use Random Access Memory (aka "RAM") in the form of 36 Williams
Tubes (Cathode Ray Tubes, like the ones in old CRT TVs and monitors, capable of
storing 1024 bits of data each).
&lt;/p&gt;&lt;/div&gt;
      
            &lt;div class="field field--name-node-link field--type-ds field--label-hidden field--item"&gt;  &lt;a href="https://www.linuxjournal.com/content/linux-and-supercomputers" hreflang="en"&gt;Go to Full Article&lt;/a&gt;
&lt;/div&gt;
      
    &lt;/div&gt;
  &lt;/div&gt;

</description>
  <pubDate>Thu, 29 Nov 2018 13:00:00 +0000</pubDate>
    <dc:creator>Bryan Lunduke</dc:creator>
    <guid isPermaLink="false">1340269 at https://www.linuxjournal.com</guid>
    </item>
<item>
  <title>ONNX: the Open Neural Network Exchange Format</title>
  <link>https://www.linuxjournal.com/content/onnx-open-neural-network-exchange-format</link>
  <description>  &lt;div data-history-node-id="1339771" class="layout layout--onecol"&gt;
    &lt;div class="layout__region layout__region--content"&gt;
      
            &lt;div class="field field--name-node-author field--type-ds field--label-hidden field--item"&gt;by &lt;a title="View user profile." href="https://www.linuxjournal.com/user/800928" lang="" about="https://www.linuxjournal.com/user/800928" typeof="schema:Person" property="schema:name" datatype="" xml:lang=""&gt;Braddock Gaskill&lt;/a&gt;&lt;/div&gt;
      
            &lt;div class="field field--name-body field--type-text-with-summary field--label-hidden field--item"&gt;&lt;p&gt;&lt;em&gt;
An open-source battle is being waged for the soul of artificial
intelligence. It is being fought by industry titans, universities and
communities of machine-learning researchers worldwide. This article
chronicles one small skirmish in that fight: a standardized file format
for neural networks. At stake is the open exchange of data among a
multitude of tools instead of competing monolithic frameworks.
&lt;/em&gt;&lt;/p&gt;



&lt;p&gt;
The good news is that the battleground is Free and Open. None of the
big players are pushing closed-source solutions. Whether it is Keras and
TensorFlow backed by Google, MXNet by Apache endorsed by Amazon, or Caffe2
or PyTorch supported by Facebook, all solutions are open-source software.
&lt;/p&gt;
&lt;p&gt;
Unfortunately, while these projects are &lt;em&gt;open&lt;/em&gt;, they are not
&lt;em&gt;interoperable&lt;/em&gt;. Each framework constitutes a complete stack that
until recently could not interface in any way with any other framework.
A new industry-backed standard, the Open Neural Network Exchange format,
could change that.
&lt;/p&gt;

&lt;p&gt;
Now, imagine a world where you can train a neural network in Keras,
run the trained model through the NNVM optimizing compiler and
deploy it to production on MXNet. And imagine that is just one of
countless combinations of interoperable deep learning tools, including
visualizations, performance profilers and optimizers. Researchers and
DevOps no longer need to compromise on a single toolchain that provides
a mediocre modeling environment and so-so deployment performance.
&lt;/p&gt;

&lt;p&gt;
What is required is a standardized format that can express any machine-learning model and store trained parameters and weights, readable and
writable by a suite of independently developed software.
&lt;/p&gt;

&lt;p&gt;
Enter the &lt;a href="http://onnx.ai"&gt;Open Neural Network Exchange
Format&lt;/a&gt; (ONNX).
&lt;/p&gt;

&lt;span class="h3-replacement"&gt;
The Vision&lt;/span&gt;

&lt;p&gt;
To understand the drastic need for interoperability with a standard like
ONNX, we first must understand the ridiculous requirements we have for
existing monolithic frameworks.
&lt;/p&gt;

&lt;p&gt;
A casual user of a deep learning framework may think of it as a language
for specifying a neural network. For example, I want 100 input neurons,
three fully connected layers each with 50 ReLU outputs, and a softmax on
the output. My framework of choice has a domain language to specify this
(like Caffe) or bindings to a language like Python with a clear API.
&lt;/p&gt;
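&lt;p&gt;
The casual specification above can be written down concretely. The
following sketch uses the Keras Sequential API; the 10-way output is an
assumption for illustration, since the example doesn't name a class
count:
&lt;/p&gt;&lt;pre&gt;
&lt;code&gt;
from keras.models import Sequential
from keras.layers import Dense

# 100 input neurons, three fully connected layers of 50 ReLU
# outputs each, and a softmax over an assumed 10 output classes
model = Sequential([
    Dense(50, activation='relu', input_dim=100),
    Dense(50, activation='relu'),
    Dense(50, activation='relu'),
    Dense(10, activation='softmax'),
])
model.summary()
&lt;/code&gt;
&lt;/pre&gt;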

&lt;p&gt;
However, the specification of the network architecture is only the tip of
the iceberg. Once a network structure is defined, the framework still
has a great deal of complex work to do to make it run on your CPU or
GPU cluster.
&lt;/p&gt;&lt;/div&gt;
      
            &lt;div class="field field--name-node-link field--type-ds field--label-hidden field--item"&gt;  &lt;a href="https://www.linuxjournal.com/content/onnx-open-neural-network-exchange-format" hreflang="en"&gt;Go to Full Article&lt;/a&gt;
&lt;/div&gt;
      
    &lt;/div&gt;
  &lt;/div&gt;

</description>
  <pubDate>Wed, 25 Apr 2018 14:19:00 +0000</pubDate>
    <dc:creator>Braddock Gaskill</dc:creator>
    <guid isPermaLink="false">1339771 at https://www.linuxjournal.com</guid>
    </item>
<item>
  <title>PSSC Labs' PowerServe HPC Servers and PowerWulf HPC Clusters</title>
  <link>https://www.linuxjournal.com/content/pssc-labs-powerserve-hpc-servers-and-powerwulf-hpc-clusters</link>
  <description>  &lt;div data-history-node-id="1339524" class="layout layout--onecol"&gt;
    &lt;div class="layout__region layout__region--content"&gt;
      
            &lt;div class="field field--name-node-author field--type-ds field--label-hidden field--item"&gt;by &lt;a title="View user profile." href="https://www.linuxjournal.com/users/james-gray" lang="" about="https://www.linuxjournal.com/users/james-gray" typeof="schema:Person" property="schema:name" datatype="" xml:lang=""&gt;James Gray&lt;/a&gt;&lt;/div&gt;
      
            &lt;div class="field field--name-body field--type-text-with-summary field--label-hidden field--item"&gt;&lt;p&gt;
In its quest to provide customers the latest and best computing solutions that
deliver relentless performance with the absolute lowest TCO, &lt;a href="http://www.pssclabs.com"&gt;PSSC Labs&lt;/a&gt; has
supercharged two server solutions with next-generation processing power. 
&lt;/p&gt;
&lt;img src="http://www.linuxjournal.com/files/linuxjournal.com/ufiles/imagecache/large-550px-centered/u1000009/12237f4.png" alt="" title="" class="imagecache-large-550px-centered" /&gt;&lt;p&gt;
The breakthrough
technology of Intel's new Xeon Scalable Processors has been integrated into
PSSC Labs' PowerServe HPC line of servers and the PowerWulf line of HPC
clusters, a move that guarantees performance capable of handling cutting-edge
computing tasks, such as real-time analytics, virtualized infrastructure and
high-performance computing. 
&lt;/p&gt;

&lt;p&gt;
Besides the advanced architecture, the new processors
offer a diverse suite of platform innovations for enhanced application performance
including Intel AVX-512, Intel Mesh Architecture, Intel QuickAssist, Intel Optane
SSDs and Intel Omni-Path Fabric. 
&lt;/p&gt;

&lt;p&gt;
Both PSSC Labs solutions are designed for reliable,
flexible, HPC solutions targeted at government, academic and commercial
environments. Some examples of sectors that will benefit from the new performance
include design and engineering, life and physical sciences, financial services and
machine/deep learning.
&lt;/p&gt;&lt;/div&gt;
      
            &lt;div class="field field--name-node-link field--type-ds field--label-hidden field--item"&gt;  &lt;a href="https://www.linuxjournal.com/content/pssc-labs-powerserve-hpc-servers-and-powerwulf-hpc-clusters" hreflang="und"&gt;Go to Full Article&lt;/a&gt;
&lt;/div&gt;
      
    &lt;/div&gt;
  &lt;/div&gt;

</description>
  <pubDate>Mon, 16 Oct 2017 14:43:17 +0000</pubDate>
    <dc:creator>James Gray</dc:creator>
    <guid isPermaLink="false">1339524 at https://www.linuxjournal.com</guid>
    </item>
<item>
  <title>LINBIT's DRBD Top</title>
  <link>https://www.linuxjournal.com/content/linbits-drbd-top</link>
  <description>  &lt;div data-history-node-id="1339517" class="layout layout--onecol"&gt;
    &lt;div class="layout__region layout__region--content"&gt;
      
            &lt;div class="field field--name-node-author field--type-ds field--label-hidden field--item"&gt;by &lt;a title="View user profile." href="https://www.linuxjournal.com/users/james-gray" lang="" about="https://www.linuxjournal.com/users/james-gray" typeof="schema:Person" property="schema:name" datatype="" xml:lang=""&gt;James Gray&lt;/a&gt;&lt;/div&gt;
      
            &lt;div class="field field--name-body field--type-text-with-summary field--label-hidden field--item"&gt;&lt;p&gt;
Many proprietary high-availability (HA) software providers require users to pay
extra for system-management capabilities. Bucking this convention and driving down
costs is &lt;a href="https://www.linbit.com/en"&gt;LINBIT&lt;/a&gt;, whose DRBD HA software solution, part of the Linux kernel since
2009, powers thousands of digital enterprises. 
&lt;/p&gt;

&lt;p&gt;
The cost savings originate from
LINBIT's DRBD Top, a new software tool to simplify the management of the
LINBIT DRBD application. Via DRBD Top's unified graphical interface,
administrators can navigate their DRBD resources conveniently without typing
multiple commands. 
&lt;/p&gt;

&lt;p&gt;
Available on GitHub, DRBD Top provides critical status,
assessment and troubleshooting capabilities for administrators who manage HA
clusters, especially those with more than two nodes.
&lt;/p&gt;
&lt;img src="http://www.linuxjournal.com/files/linuxjournal.com/ufiles/imagecache/large-550px-centered/u1000009/12237f2.png" alt="" title="" class="imagecache-large-550px-centered" /&gt;&lt;/div&gt;
      
            &lt;div class="field field--name-node-link field--type-ds field--label-hidden field--item"&gt;  &lt;a href="https://www.linuxjournal.com/content/linbits-drbd-top" hreflang="und"&gt;Go to Full Article&lt;/a&gt;
&lt;/div&gt;
      
    &lt;/div&gt;
  &lt;/div&gt;

</description>
  <pubDate>Wed, 11 Oct 2017 14:19:54 +0000</pubDate>
    <dc:creator>James Gray</dc:creator>
    <guid isPermaLink="false">1339517 at https://www.linuxjournal.com</guid>
    </item>
<item>
  <title>JMR SiloStor NVMe SSD Drives</title>
  <link>https://www.linuxjournal.com/content/jmr-silostor-nvme-ssd-drives</link>
  <description>  &lt;div data-history-node-id="1339460" class="layout layout--onecol"&gt;
    &lt;div class="layout__region layout__region--content"&gt;
      
            &lt;div class="field field--name-node-author field--type-ds field--label-hidden field--item"&gt;by &lt;a title="View user profile." href="https://www.linuxjournal.com/users/james-gray" lang="" about="https://www.linuxjournal.com/users/james-gray" typeof="schema:Person" property="schema:name" datatype="" xml:lang=""&gt;James Gray&lt;/a&gt;&lt;/div&gt;
      
            &lt;div class="field field--name-body field--type-text-with-summary field--label-hidden field--item"&gt;&lt;p&gt;
Compute-intensive workflows are the environments in which the newly developed &lt;a href="http://jmr.com"&gt;JMR&lt;/a&gt;
SiloStor NVMe family of SSD drives is designed to show its colors. Ideal for HPC,
data centers, genome research, content creation, CGI/animation, codec processing
and gaming, among others, the SiloStor drive family comes in three NVMe/PCIe
configurations: single-drive module, x4 PCIe connectivity in 512GB/1TB/2TB
capacities; dual-drive, x8 connectivity in 1TB/2TB/4TB capacities; and quad-drive
module, x8 connectivity, available in 2TB/4TB/8TB capacities. The dual- and
quad-drive cards incorporate a PCIe switch, and the drives can be striped (on a
single card) for additional performance. 
&lt;/p&gt;
&lt;img src="http://www.linuxjournal.com/files/linuxjournal.com/ufiles/imagecache/large-550px-centered/u1000009/12217f1.jpg" alt="" title="" class="imagecache-large-550px-centered" /&gt;&lt;p&gt;
All SiloStor designs incorporate active
heatsink coolers on the drive modules themselves, maintaining low operating
temperatures even during intensive sequential write operations. Key performance
metrics include an average access time of &lt;1 ms, 2 million hours MTBF, 1,200
TBW minimum endurance, 90,000/70,000 IOPS random 4K read/write speed and
4,000/3,000 MB/s sequential read/write speed.
&lt;/p&gt;&lt;/div&gt;
      
            &lt;div class="field field--name-node-link field--type-ds field--label-hidden field--item"&gt;  &lt;a href="https://www.linuxjournal.com/content/jmr-silostor-nvme-ssd-drives" hreflang="und"&gt;Go to Full Article&lt;/a&gt;
&lt;/div&gt;
      
    &lt;/div&gt;
  &lt;/div&gt;

</description>
  <pubDate>Wed, 09 Aug 2017 15:50:57 +0000</pubDate>
    <dc:creator>James Gray</dc:creator>
    <guid isPermaLink="false">1339460 at https://www.linuxjournal.com</guid>
    </item>
<item>
  <title>SUSE Linux Enterprise High Availability Extension</title>
  <link>https://www.linuxjournal.com/content/suse-linux-enterprise-high-availability-extension</link>
  <description>  &lt;div data-history-node-id="1339336" class="layout layout--onecol"&gt;
    &lt;div class="layout__region layout__region--content"&gt;
      
            &lt;div class="field field--name-node-author field--type-ds field--label-hidden field--item"&gt;by &lt;a title="View user profile." href="https://www.linuxjournal.com/users/james-gray" lang="" about="https://www.linuxjournal.com/users/james-gray" typeof="schema:Person" property="schema:name" datatype="" xml:lang=""&gt;James Gray&lt;/a&gt;&lt;/div&gt;
      
            &lt;div class="field field--name-body field--type-text-with-summary field--label-hidden field--item"&gt;&lt;p&gt;
Historically, data replication has been available only piecemeal through
proprietary vendors. In a quest to remediate history, &lt;a href="http://suse.com"&gt;SUSE&lt;/a&gt; and partner
&lt;a href="http://linbit.com"&gt;LINBIT&lt;/a&gt; announced a solution that promises to change the economics of data
replication. The two companies' collaborative effort is the headliner in
the updated SUSE Linux Enterprise High Availability Extension, which now
includes LINBIT's integrated geo-clustering technology. 
&lt;/p&gt;
&lt;img src="http://www.linuxjournal.com/files/linuxjournal.com/ufiles/imagecache/large-550px-centered/u1000009/12150f7b.png" alt="" title="" class="imagecache-large-550px-centered" /&gt;&lt;p&gt;
Providing a new
capability to replicate data across unlimited distances, LINBIT has enhanced
SUSE's high availability solution that is built on open-source software
and runs on commodity hardware. The LINBIT solution guards against failures
or disasters by providing policy-driven mechanisms for customer
applications and data to continue operations in another geographically
dispersed data center. 
&lt;/p&gt;
&lt;img src="http://www.linuxjournal.com/files/linuxjournal.com/ufiles/imagecache/large-550px-centered/u1000009/12150f7a.jpg" alt="" title="" class="imagecache-large-550px-centered" /&gt;&lt;p&gt;
SUSE Linux Enterprise High Availability Extension is
an integrated suite of open-source clustering technologies—from LINBIT and
others—that enables customers to eliminate single points of failure, thus
helping to maintain business continuity, enable compliance, protect data
integrity, maintain isolation for multiple tenants and reduce unplanned
downtime for mission-critical workloads.
&lt;/p&gt;&lt;/div&gt;
      
            &lt;div class="field field--name-node-link field--type-ds field--label-hidden field--item"&gt;  &lt;a href="https://www.linuxjournal.com/content/suse-linux-enterprise-high-availability-extension" hreflang="und"&gt;Go to Full Article&lt;/a&gt;
&lt;/div&gt;
      
    &lt;/div&gt;
  &lt;/div&gt;

</description>
  <pubDate>Wed, 29 Mar 2017 15:25:00 +0000</pubDate>
    <dc:creator>James Gray</dc:creator>
    <guid isPermaLink="false">1339336 at https://www.linuxjournal.com</guid>
    </item>
<item>
  <title>Three EU Industries That Need HPC Now</title>
  <link>https://www.linuxjournal.com/content/three-eu-industries-need-hpc-now</link>
  <description>  &lt;div data-history-node-id="1339340" class="layout layout--onecol"&gt;
    &lt;div class="layout__region layout__region--content"&gt;
      
            &lt;div class="field field--name-node-author field--type-ds field--label-hidden field--item"&gt;by &lt;a title="View user profile." href="https://www.linuxjournal.com/users/ted-schmidt" lang="" about="https://www.linuxjournal.com/users/ted-schmidt" typeof="schema:Person" property="schema:name" datatype="" xml:lang=""&gt;Ted Schmidt&lt;/a&gt;&lt;/div&gt;
      
            &lt;div class="field field--name-body field--type-text-with-summary field--label-hidden field--item"&gt;The success of High Performance Computing (HPC) relies in no small part on the OpenPOWER Foundation, which was founded in 2013. The reason this open ecosystem is so important is that it provided members open access to the IBM POWER8 technology, which resulted in huge advances in innovation. One of those innovations came in the form of the NVIDIA GPU accelerator, which not only provides improved graphics capabilities, but also assumes some of the computational load stemming from simulations. IBM POWER8 servers are already capable of clock speeds of more than 4GHz and of providing 96 simultaneous threads. Include NVIDIA Tesla GPU Accelerators, and the result is &lt;a href="http://www-03.ibm.com/systems/uk/power/hardware/hpc/outthink.html?cm_mmc=Earned-_-Systems_Systems+-+High-Performance+Computing-_-GB_GB-_-UK-LinuxJournal-Articol-ThreeEUIndustries-Post3-EX-AnHPCSystemThatIsExtremelyFast&amp;cm_mmca1=000016BN&amp;cm_mmca2=10000539&amp;"&gt;an HPC system that is extremely fast&lt;/a&gt;, which ends up solving some tricky problems in three key industries.
&lt;p&gt;
&lt;/p&gt;
&lt;strong&gt;Auto Industry Product Design and Testing&lt;/strong&gt;
&lt;br /&gt;
The EU auto industry faces increasing pressure to provide more fuel-efficient and safer vehicles, while at the same time delivering new products like reliable electric vehicles and even self-driving vehicles. Although the European Automobile Manufacturers’ Association (ACEA) continues to predict growth in the EU market, margins of 1-3% keep EU auto manufacturers looking for ways to gain efficiencies and cut costs. 
&lt;p&gt;
&lt;/p&gt;
One answer to these challenges lies in the ability to consume and make sense of data from a multitude of sources, and to do it quickly and effectively. Fuel efficiency, for instance, is a product of data not only from engine components, but also from braking systems, batteries, tires and the external environment. Self-driving cars must handle even more complex datasets, and they must do so reliably and safely.
&lt;p&gt;
&lt;/p&gt;
&lt;a href="https://bs.serving-sys.com/serving/adServer.bs?cn=trd&amp;mc=click&amp;pli=20803244&amp;PluID=0&amp;ord=[timestamp]"&gt;&lt;img src="http://www.linuxjournal.com/files/linuxjournal.com/ufiles/imagecache/large-550px-centered/u800391/POU12637-USEN-00_300x250_article_2_3_0.jpg" alt="There's HPC. And there's HPC on POWER. IBM." title="" class="imagecache-large-550px-centered" /&gt;&lt;/a&gt;&lt;img src="https://bs.serving-sys.com/serving/adServer.bs?cn=display&amp;c=19&amp;mc=imp&amp;pli=20803244&amp;PluID=0&amp;ord=[timestamp]&amp;rtu=-1" /&gt;&lt;p&gt;
&lt;/p&gt;&lt;/div&gt;
      
            &lt;div class="field field--name-node-link field--type-ds field--label-hidden field--item"&gt;  &lt;a href="https://www.linuxjournal.com/content/three-eu-industries-need-hpc-now" hreflang="und"&gt;Go to Full Article&lt;/a&gt;
&lt;/div&gt;
      
    &lt;/div&gt;
  &lt;/div&gt;

</description>
  <pubDate>Sat, 25 Mar 2017 06:03:14 +0000</pubDate>
    <dc:creator>Ted Schmidt</dc:creator>
    <guid isPermaLink="false">1339340 at https://www.linuxjournal.com</guid>
    </item>

  </channel>
</rss>
