• Treczoks@lemmy.world · 9 hours ago

      I once worked on a supercomputer in the olden times - this was before Linux. You basically wrote your calculation application on a front-end system with a cross-compiler. It was then transferred to the target machine’s RAM and ran there. Your application was the only thing running on that machine. No OS, no drivers, no interrupts (at least none that I knew of). Just your application directly on the hardware. Once your program had finished, the RAM was read back, and you could analyze the dump to extract your results.
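
      The “analyze the dump” step is basically just seeking to wherever your program left its output and reinterpreting the raw bytes. A purely hypothetical sketch in C (the file name, offset, element count, and use of doubles are made up for illustration; the actual machine’s layout, word size, and byte order would of course differ):

      ```c
      /* Hypothetical: pull an array of results out of a raw RAM dump.
       * Offset, count, and data type are illustrative assumptions only;
       * endianness and word size of the original machine are ignored here. */
      #include <stdio.h>

      #define RESULT_OFFSET 0x10000L   /* assumed location of the result array */
      #define RESULT_COUNT  1000       /* assumed number of values */

      int main(void) {
          FILE *dump = fopen("ram.dump", "rb");
          if (!dump) { perror("ram.dump"); return 1; }

          double results[RESULT_COUNT];
          fseek(dump, RESULT_OFFSET, SEEK_SET);            /* jump to the results */
          size_t n = fread(results, sizeof(double), RESULT_COUNT, dump);
          fclose(dump);

          for (size_t i = 0; i < n; i++)
              printf("result[%zu] = %g\n", i, results[i]);
          return 0;
      }
      ```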

    • olosta@lemmy.world · 11 hours ago

      No it’s not; other Unixes were on the list until 2017, and there were even some Windows and macOS entries for a time.

        • jj4211@lemmy.world · 4 hours ago

          “Clustering” in this context is a bit different. Do you have a network of some kind? Congratulations, you can do an HPC cluster.

          In the context you describe, it’s usually referring to HA clustering or some application-specific clustering. HPC clustering is a lot less picky: if someone can port an MPI runtime to it, you can make a go of it.
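
          For anyone wondering what “port an MPI runtime” means in practice: MPI is just a message-passing library plus a launcher, and the programming model looks roughly like the sketch below (assuming an implementation such as Open MPI or MPICH is available; mpicc/mpirun are the conventional build and launch wrappers):

          ```c
          /* Minimal MPI program: each process reports its rank.
           * Build:  mpicc hello_mpi.c -o hello_mpi
           * Run:    mpirun -np 4 ./hello_mpi   (4 processes, possibly across nodes) */
          #include <mpi.h>
          #include <stdio.h>

          int main(int argc, char **argv) {
              int rank, size;

              MPI_Init(&argc, &argv);                /* start the MPI runtime */
              MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this process's id */
              MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total number of processes */

              printf("Hello from rank %d of %d\n", rank, size);

              MPI_Finalize();                        /* shut down the runtime */
              return 0;
          }
          ```

          As long as those calls and the launcher work over whatever network the machine has, the rest of the HPC stack can follow.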