• over_clox@lemmy.world
    link
    fedilink
    English
    arrow-up
    11
    ·
    edit-2
    15 hours ago

    From a programmer and optimizer perspective, I always prefer the original binary definitions for memory sizes.

    Like, I prefer the speed and convenience of being able to perform bit shifts within a binary system to quickly multiply and divide by powers of 2, without the headache of having to think in decimal.

    The whole base 10 thing is more meant for the average consumer and marketing, not those that actually understand the binary nature of the machine.

    • brygphilomena@lemmy.dbzer0.com
      9 hours ago

      I always still use powers of 2 for everything. Even though I can have any HDD size I want with virtualization, I still do it as a power of 2.

      For anything consumer-facing, sure, marketing uses 1000. But it just makes sense to keep programming with 1024.

    • CombatWombatEsq@lemmy.world
      13 hours ago

      For sure. I do think, though, that most people expect the base 10 versions, so even though I prefer to work in KiB, I always quote kB in user-facing documentation.

      • over_clox@lemmy.world
        11 hours ago

        No, I learned programming back when programmers actually worked in binary and sexadecimal (Ok IBM fanboys, they call that hexadecimal now, since IBM doesn’t like sex).

        I still use the old measurement system, save for the rare occasions I gotta convert to layman's terms for the average user.

        You can tell a lot really quick when talking to someone else, when they don't understand why 2^10 (1024) is the underlying standard the CPU likes.

        Oh wait, there’s a 10 in (2^10)…

        Wonder where that came from?.. 🤔

        I dunno, but bit shift binary multiplications and divisions are super fast in the integer realm, but get dogshit slow when performed in the decimal realm.

          • over_clox@lemmy.world
            10 hours ago

            But if you fall into the folly of decimal on a device inherently meant to process binary, then you might allocate an array of 1000 items rather than the natural binary 1024, and any code that assumes the full 1024 will write right past the end: a buffer overflow.

            Like, sell by the 1000, but program by the 1024.

    • Xavienth@lemmygrad.ml
      14 hours ago

      It’s less for the consumer and more of an SI/IEC/BIPM thing. If the prefix k can mean roughly 1000, with the exact value depending on context, that causes all sorts of problems. They maintain that k means strictly 1000, for good reason.

    • qprimed@lemmy.ml
      14 hours ago

      KiB for FTW :-)

      been using it for years now when i need to be precise. colloquially, everyone i know still understands that contextually, K is 2^10

      • over_clox@lemmy.world
        14 hours ago

        I’m not all about jamming a random i into terminology that was already well defined decades ago. But hey, you go for it if that’s what you prefer.

        By the way, ‘for FTW’ makes about as much sense as saying ‘ATM machine’, it’s redundant.

        • qprimed@lemmy.ml
          13 hours ago

          yup! serves me right for responding while rushing out of the door. gonna leave that here for posterity.

          edit: and… switching networks managed to triple post this response. i think thats enough internet for today.

        • nous@programming.dev
          12 hours ago

          KiB was defined decades ago… way back in 1999. Before that it was not well defined: KB could mean binary or decimal depending on what, or who, was doing the measuring.

          • over_clox@lemmy.world
            11 hours ago

            And? I started programming in 1996, back when most computer storage and memory measurements were already generally defined around the base-2 binary system.

            Floppy disks were about the only exception: the ‘1.44 MB’ label was a mixed unit, really 1440 KiB (1,474,560 bytes), so neither purely base 10 nor base 2. It was indeed a clusterfuck. In modern terms that works out to about 1.41 MiB, or 1.47 decimal MB.

            I do wonder sometimes how many buffer overflow errors and such are the result of ‘programmers’ declaring their arrays in base 10 (1000) rather than powers of 2 (1024)… 🤔
