• rekabis@programming.dev
    link
    fedilink
    arrow-up
    14
    ·
    1 hour ago

    The fact that “AI” hallucinates so extensively and gratuitously just means the only way it can benefit software development is as a gaggle of coked-up juniors who keep the senior from working on their own stuff because they’re constantly in janitorial mode.

  • Charlxmagne@lemmy.world
    link
    fedilink
    arrow-up
    12
    ·
    1 hour ago

    This is what happens when you don’t know what your own code does: you lose the ability to manage it. That is precisely why AI won’t take programmers’ jobs.

  • M0oP0o@mander.xyz
    link
    fedilink
    arrow-up
    44
    ·
    2 hours ago

    Ha, you fools still pay for doors and locks? My house is now 100% done with fake locks and doors; they are so much lighter and easier to install.

    Wait! Why am I always getting robbed lately? It can’t be my fake locks and doors! It has to be weirdos online following what I do.

  • Hilarious and true.

    Last week some new up-and-coming coder was showing me their tons and tons of sites made with the help of ChatGPT. They all look great on the front end. So I tried to use one. Error. Tried to use another. Error. Mentioned the errors and they brushed it off. I am 99% sure they do not have the coding experience to fix the errors. I politely disconnected from them at that point.

    What’s worse is when a noncoder asks me, a coder, to look over and fix their AI-generated code. My response is “no, but if you set aside an hour I will teach you how HTML works so you can fix it yourself.” Never has one of these kids asking AI to code things accepted, which, to me, means they aren’t worth my time. Don’t let them use you like that. You aren’t another tool they can combine with AI to generate things correctly without having to learn things themselves.

    • Thoven@lemdro.id
      link
      fedilink
      English
      arrow-up
      30
      ·
      2 hours ago

      100% this. I’ve gotten to where, when people try and rope me into their new million-dollar app idea, I tell them that there are fantastic resources online to teach themselves to do everything they need. I offer to help them find those resources and even help when they get stuck. I’ve probably done this dozens of times by now. No bites yet. All those millions wasted…

    • MyNameIsIgglePiggle@sh.itjust.works
      link
      fedilink
      arrow-up
      13
      ·
      2 hours ago

      I’ve been a professional full-stack dev for 15 years and dabbled for years before that - I can absolutely code and know what I’m doing (and have used Cursor, and just deleted most of what it made for me when I let it run).

      But my frontends have never looked better.

  • Takumidesh@lemmy.world
    link
    fedilink
    arrow-up
    22
    arrow-down
    3
    ·
    4 hours ago

    This is satire / trolling for sure.

    LLMs aren’t really at the point where they can spit out an entire program, including handling deployment, environments, etc. without human intervention.

    If this person is ‘not technical’ they wouldn’t have been able to successfully deploy and interconnect all of the pieces needed.

    The AI may have been able to spit out snippets, and those snippets may be very useful, but where it stands, it’s just not going to be able to, with no human supervision/overrides, write the software, stand up the DB, and deploy all of the services needed. With human guidance, sure, but without someone holding the AI’s hand it just won’t happen (remember, this person is ‘not technical’).

    • Idk, I’ve seen some crazy complicated stuff woven together by people who can’t code. I’ve got a friend who has no job and is trying to make a living off coding while having been totally unable to learn coding for 15+ years. Some of the things they make are surprisingly complex. Though also, and the person mentioned here may do similarly, they don’t ONLY use AI. They use GitHub a lot too. They make nearly nothing themselves, but go through GitHub and basically combine large chunks of code others have made with AI-generated code. Somehow they do it well enough to have done things with servers, cryptocurrency, etc… all the while not knowing any coding language.

    • MyNameIsIgglePiggle@sh.itjust.works
      link
      fedilink
      arrow-up
      5
      ·
      2 hours ago

      Claude Code can make something that works, but it’s kinda over-engineered and really struggles to make an elegant solution that maximises code reuse - it’s the opposite of DRY.

      I’m doing a personal project at the moment and used it for a few days, made good progress but it got to the point where it was just a spaghetti mess of jumbled code, and I deleted it and went back to implementing each component one at a time and then wiring them together manually.

      My current workflow is basically never let them work on more than one file at a time, and build the app one component at a time, starting at the ground level and then working in, so for example:

      Create base classes that things will extend, then create an example data model class, and iterate on that architecture A LOT until it’s really elegant.

      Then I’ve been getting it to write me a generator, rather than the actual code for the models.

      Then (level 3) we start with the UI layer, so now we make a UI kit the app will use and reuse for different components.

      Then we make a UI component that will be used in a screen. I’m using Flutter as an example, so it would be a stateless component.

      We now write tests for the component

      Now we do a screen, and I import each of the components.

      It’s still very manual, but it’s getting better. You are still going to need a human coder, I think forever, but there are two big problems that aren’t being addressed because people are just putting their heads in the sand and saying “nah, can’t do it”, or, like the clown OP in the post, thinking they can do it.

      1. Because dogs be clownin, the public perception of programming as a career will be devalued: “I’ll just make it myself!” Or, like my rich engineer uncle said to me when I was doing websites professionally: a 13-year-old can just make a website, why would I pay you so much to do it? THAT FUCKING SUCKS. But a similar attitude has existed before with people saying “I’ll just hire Indians.” This is bullshit, but perception is important, and it’s going to require you to justify yourself for a lot more work.

      2. And this is the flip-side good news. These skills you have developed - it is going to be SO MUCH FUCKING HARDER TO LEARN THEM. When you can just say “hey, generate me an app that manages customers and follow-ups” and something gets spat out, you aren’t going to go through the grind required to work out basic shit. People will simply not get to the same level they are at now.

      That logic about how to scaffold and architect an app in a sensible way - USING AI TOOLS - is actually the new skillset. You need to know how to build the app, and then how to efficiently and effectively use the new tools to actually construct it. Then you need to be able to do code review for each change.

      </rant>

    • nick@midwest.social
      link
      fedilink
      arrow-up
      4
      ·
      2 hours ago

      Mmmmmm no, Claude definitely is. You have to know what to ask it, but I generated an entire dead man’s switch daemon written in Go in like an hour with it, just to see if I could.

      • Takumidesh@lemmy.world
        link
        fedilink
        arrow-up
        5
        ·
        2 hours ago

        So you did one simple program.

        SaaS involves a suite of tooling and software, not just a program that you build locally.

        You need, at a minimum, database deployments (with scaling and redundancy) and cloud software deployments (with scaling and redundancy).

        SaaS is a full stack product, not a widget you run on your local machine. You would need to deputize the AI to log into your AWS (sorry, it would need to create your AWS account) and fully provision your cloud infrastructure.

    • qaz@lemmy.world
      link
      fedilink
      English
      arrow-up
      3
      ·
      edit-2
      2 hours ago

      It’s further than you think. I spoke to someone about it today and he told me it produced a basic SaaS app for him. He said that it looked surprisingly okay and the basic functionalities actually worked too. He did note that it kept using deprecated code, consistently made a few basic mistakes despite being told how to avoid them, and failed to produce nontrivial functionalities.

      He did say that it used very common libraries and we hypothesized that it functioned well because a lot of relevant code could be found on GitHub and that it might function significantly worse when encountering less popular frameworks.

      Still, it’s quite impressive, although not surprising considering it was only a matter of time before people would start to feed the feedback of an IDE back into it.

    • jackeryjoo@lemmy.world
      link
      fedilink
      English
      arrow-up
      3
      ·
      3 hours ago

      We just built and deployed a fully functional AWS app for our team, entirely written by AI. From the Terraform, to the backing API, to the Angular frontend. All AI. I think AI is further along here than you suspect.

      • Takumidesh@lemmy.world
        link
        fedilink
        arrow-up
        4
        ·
        edit-2
        2 hours ago

        I’m skeptical. You are saying that your team has no hand in the provisioning and you deputized an AI with AWS keys and just let it run wild?

      • hubobes@sh.itjust.works
        link
        fedilink
        arrow-up
        2
        ·
        edit-2
        1 hour ago

        How? We’ve been trying to adopt AI for dev work for years now, and every time the next-gen tool or model gets released it fails spectacularly at basic things. And that’s just the technical stuff; I still have no idea how to tell it to implement our use cases, as it simply does not understand the domain.

        It is great at building things others have already built and that it could train on, but we don’t really have a use case for that.

    • Tja@programming.dev
      link
      fedilink
      arrow-up
      3
      ·
      3 hours ago

      Might be satire, but I think some “products based on LLMs” (not LLMs alone) would be able to. There are pretty impressive demos out there, but honestly I haven’t tried them myself.

  • mindbleach@sh.itjust.works
    link
    fedilink
    arrow-up
    41
    ·
    6 hours ago

    An otherwise meh article concluded with “It is in everyone’s interest to gradually adjust to the notion that technology can now perform tasks once thought to require years of specialized education and experience.”

    Much as we want to point and laugh - this is not some loon’s fantasy. This is happening. Some dingus told spicy autocomplete ‘make me a database!’ and it did. It’s surely as exploit-hardened as a wet paper towel, but it functions. Largely as a demonstration of Kernighan’s law.

    This tech is borderline miraculous, even if it’s primarily celebrated by the dumbest motherfuckers alive. The generation and the debugging will inevitably improve to where the machine is only as bad at this as we are. We will be left with the hard problem of deciding what the software is supposed to do.

    • easily3667@lemmus.org
      link
      fedilink
      English
      arrow-up
      7
      ·
      edit-2
      4 hours ago

      This industry also spends most of its money either changing things that don’t need to change (we optimized the right-click menu to remove this item, mostly to fuck your muscle memory) or avoiding changing things (rather than implementing 2FA, banks have implemented 58372658 distinct algorithms for detecting things that might be fraud).

      If you’re just talking about enabling small-scale innovation, you’re probably right, but if you’re talking about the industry as a whole, I think you need to look at what people in industry are actually spending their time on.

      It’s not code.

    • Uli@sopuli.xyz
      link
      fedilink
      arrow-up
      7
      arrow-down
      1
      ·
      edit-2
      4 hours ago

      Yeah, I’ve been using it heavily. While someone without technical knowledge will surely allow AI to build a highly insecure app, people with more technological knowledge are going to propel things to a level where the less tech-savvy will have fewer and fewer pitfalls to fall into.

      For the past two months, I’ve been leveraging AI to build a CUE system that takes a user desire (e.g. “I want to deploy a system with an app that uses a database and a message queue”, expressed as a short JSON) and converts it into a simple configuration file that unpacks into all the Kubernetes manifests required to deploy the system they want to deploy.

      I’m trying to be fully shift-left about it. So, even if the user’s configuration is as simple as my example, it should still use CUE templating to construct the files needed for a full DevSecOps stack - Ingress Controller, KEDA, some kind of logging such as an ELK stack, vulnerability scanners, policy agents, etc. The idea is that every stack should at all times be created in a secure state. And extra CUE transformations ensure that you can split the deployment destinations in any way: local/on-prem, any cloud provider, or any combination thereof.

      The idea is that if I need to swap out a component, I just change one override in the config and the incoming component already knows how to connect to everything and do what the previous component was doing because I’ve already abstracted the component’s expected manifest fields using CUE. So, I’d be able to do something like changing my deployment from one cloud to another with a click of a button. Or build up a whole new fully secure stack for a custom purpose within a few minutes.

      The idea is I could use this system to launch my own social media app, since I’ve been planning the ideal UX for many years. But whether or not that pans out, I can take my CUE system and put a web interface over it to turn it into a mostly automated PaaS. I figure I could undercut most PaaS companies and charge just a few percentage points above cost (using OpenCost to track the expenses). If we get to the point where we have a ton of novices creating apps with AI, I might be in a lucrative position if I have a PaaS that can quickly scale and provide automated secure back ends.

      Of course, I intend on open-sourcing the CUE once it’s developed enough to get things off the ground. I’d really love to make money from my creative ideas on a socialized media app that I create, but I’m less excited about gatekeeping this kind of advancement.

      Interested to know if anyone has done this type of project in the past. Definitely wouldn’t have been able to move at nearly this speed without AI.

        • Uli@sopuli.xyz
          link
          fedilink
          arrow-up
          1
          ·
          edit-2
          4 hours ago

          I’ve never heard of this before, but you’re right that it sounds very much like what I’m doing. Thank you! Definitely going to research this topic thoroughly now to make sure I’m not reinventing the wheel.

          Based on the sections in that link, I wondered if the MASD project was more geared toward the software dev side or devops. I asked Google and got this AI response:

          “MAD” (Modern Application Development) services, often used in the context of software development, encompass a broader approach that includes DevOps principles and tools, focusing on rapid innovation and cloud-native architectures, rather than solely on systems development.

          So (if accurate), it sounds like all the modernized automation of CI/CD, IaC, and GitOps that I know and love is already engaging in MAD philosophy. And what I’m doing is really just providing the last puzzle piece to fully automate stack architecting. I’m guessing the reason it doesn’t already exist is that a lot of the open-source tools I’m relying on to do the heavy lifting inside Kubernetes are themselves relatively new. So, hopefully this all means I’m not wasting my time lol

          • Senal@programming.dev
            link
            fedilink
            English
            arrow-up
            2
            ·
            3 hours ago

            AFAICT MASD is an iteration on MDE which incorporates parts of MAD but not in a direct fashion.

            Lots of acronyms there.

            These types of systems do exist; they just aren’t mainstream, because there hasn’t been a version of them that could be easily used for general development outside of the specific mid-level niches they are built in.

            I think it’s the goal, but I’ve not seen anything come close yet.

            Admittedly I’m not an authority so it may just be me missing the important things.

            • Uli@sopuli.xyz
              link
              fedilink
              arrow-up
              1
              ·
              3 hours ago

              Thanks for the info. When I searched MASD, it told me instead about MAD, so it’s good to know how they’re differentiated.

              This whole idea comes from working in a shop where most of their DevSecOps practices were fantastic, but we were maintaining fleets of Helm charts (picture the same Helm override sent to lots of different places with slightly different configuration). The unique values for each deployment were buried “somewhere” in all of these very lengthy values.yaml override files. Basically you had to dig into thousands of lines of code whenever you didn’t know off-hand how a deployment was configured.

              I think when you’re in the thick of a job, people tend to just do what gets the job done, even if it means you’re going to have to do it again in two weeks. We want to automate, but it becomes a battle between custom-fitting and generalization, with the tradeoff being that generalization takes a lot of time and effort to do correctly.

              So, I think plenty of places are “kind of” at this level, where they might use CUE to generalize but tend to modify the CUE for each use case individually. But many DevOps teams, I suspect, aren’t even using CUE; they’re still modifying raw YAML. I think of YAML like plumbing: it’s very important, but best not exposed for manual modification unless necessary. Mostly I just see CUE used to construct and deliver Helm/Kubernetes on the cluster, in tools like KubeVela and Radius. This is great for overriding complex Helm manifests with a simple Application yaml, but the missing niche I’m trying to fill is a tool that provides the connections between different tools and constrains the overall structure of a DevSecOps stack.

              I’d imagine any company with a team who has solved this problem is keeping it proprietary since it represents a pretty big advantage at the moment. But I think it’s just as likely that a project like this requires such a heavy lift before seeing any gain that most businesses simply aren’t focusing on it.

              • Senal@programming.dev
                link
                fedilink
                English
                arrow-up
                1
                ·
                1 hour ago

                My experiences are similar to yours, though less k8s-focused and more general DevSecOps.

                it becomes a battle between custom-fitting and generalisation.

                This is mentioned in the link as “Barely General Enough”. I’m not sure I fully subscribe to that specific interpretation, but the trade-off between generalisation and specialisation is certainly a point of contention in all but the smallest dev houses (assuming they are not just cranking out hard-coded one-off solutions).

                I dislike the YAML syntax, in the same way I dislike Python, but it is pervasive in the industry at the moment, so you work with what you have.

                I don’t think YAML is the issue as much as the uncontrolled nature of the usage.

                You’d have the same issue with any format that is as flexible to interpretation and being created/edited by hand.

                As in, if the YAML were generated and used automatically as part of a chain I don’t think it’d be an issue, but it is not nearly prescriptive enough to produce the high-level kind of model definitions further up the requirements stack.

                Note: I’m not saying it couldn’t be done in YAML; I’m saying that it would be a massive effort to shoehorn what was needed into a structure that wasn’t designed for that kind of thing.

                Which then brings us back to the generalisation vs specialisation argument: do you create a hyper-specific DSL that allows you only to define things that will work within the boundaries of what you want (and does that mean it can only work in those boundaries), or do you introduce more general definitions and the complexity that comes with that?

                Whether the solution is another layer of abstraction into a different format or something else entirely I’m not sure, but I am sure that raw YAML isn’t it.

                • Uli@sopuli.xyz
                  link
                  fedilink
                  arrow-up
                  1
                  ·
                  13 minutes ago

                  Yes, I think YAML’s biggest strength is also its built-in flaw: its flexibility. YAML as a data structure is built to be so open-ended that it can be no surprise when every component written in Go and using YAML as a data structure builds its spec in a slightly different way, even when performing the exact same functions.

                  That’s why I yearned for something like CUE and was elated to discover it. CUE provides the control that YAML by its very nature cannot enforce. I can create CUE that defines the YAML structure in general, so anything my system builds is valid YAML. And I can create a constraint which builds off of that and defines the structure of a valid Kubernetes manifest. Then, when I go to define the CUE that builds up a KubeVela app, I can base its constraints on those k8s constraints and add only KubeVela-specific rules.

                  Then I have modules of other components that could be defined as KubeVela Applications on the cluster, but I define their constraints agnostically and merge the constraint sets together to create the final YAML in proper KubeVela Application format. And if one component needs to talk to another component, I standardize the syntax of the shared function and then link that function up to whatever tool is currently in use for that purpose.

                  I think it’s a good point that overgeneralization can and does occur and my “one size fits all” approach might not actually fit all. But I’m hoping that if I finish this tool and shop it to a place that thinks it’s overkill, I can just have them tell me which parts they want generalized and define a function to export a subset of my CUE for their needs. And in that scenario, I would flip and become a big proponent of “Just General Enough”. Because then, they can have the streamlined fit-for-purpose system they desire and I can have the satisfaction of not having to do the same work over and over again.

                  But my fear about going down that road is that it might be less of an export of a subset of code and more of building yet another system that can MAD-style generate my whole CUE system for whatever level of generalization I want. As you say, it just becomes another abstraction layer. Can’t say I’m quite ready to go that far 😅

  • merthyr1831@lemmy.ml
    link
    fedilink
    English
    arrow-up
    120
    ·
    8 hours ago

    AI is yet another technology that enables morons to think they can cut out the middleman of programming staff, only to very quickly realise that we’re more than just monkeys with typewriters.

      • umbrella@lemmy.ml
        link
        fedilink
        arrow-up
        1
        ·
        3 minutes ago

        I have a mobile touchscreen typewriter, but it isn’t very effective at writing code.

      • toynbee@lemmy.world
        link
        fedilink
        arrow-up
        7
        ·
        6 hours ago

        I was going to post a note about typewriters, allegedly from Tom Hanks, which I saw years and years ago; but I can’t find it.

        Turns out there’s a lot of Tom Hanks typewriter content out there.

      • xthexder@l.sw0.com
        link
        fedilink
        arrow-up
        33
        ·
        6 hours ago

        But then they’d have a dev team who wrote the code and therefore knows how it works.

        In this case, the hackers might understand the code better than the “author” because they’ve been working in it longer.

    • 1024_Kibibytes@lemm.ee
      link
      fedilink
      arrow-up
      72
      ·
      7 hours ago

      That is the real dead Internet theory: everything from production to malicious actors to end users is just AI scripts wasting electricity and hardware resources for the benefit of no human.

        • redd@discuss.tchncs.de
          link
          fedilink
          arrow-up
          15
          ·
          7 hours ago

          Not only the internet. Soon everybody will use AI for everything. Lawyers will use AI in court on both sides. AI will fight against AI.

          • devfuuu@lemmy.world
            link
            fedilink
            arrow-up
            15
            ·
            6 hours ago

            I was at a coffee shop the other day and 2 lawyers were discussing how they were doing stuff with AI that they didn’t know anything about and then just passing it along to their clients.

            That shit scared the hell out of me.

            And everything will just keep getting worse, with more and more common folk eating up the hype and brainwashing, using these highly incorrect tools at all levels of our society every day to make decisions about things they have no idea about.

            • NABDad@lemmy.world
              link
              fedilink
              English
              arrow-up
              6
              ·
              5 hours ago

              I’m aware of an effort to get LLM AI to summarize medical reports for doctors.

              Very disturbing.

              The people driving it where I work tend to be the people who know the least about how computers work.

          • Telorand@reddthat.com
            link
            fedilink
            arrow-up
            4
            ·
            6 hours ago

            It was a time of desolation, chaos, and uncertainty. Brother pitted against brother. Babies having babies.

            Then one day, from the right side of the screen, came a man. A man with a plastic rectangle.

      • atomicbocks@sh.itjust.works
        link
        fedilink
        English
        arrow-up
        11
        ·
        7 hours ago

        The Internet will continue to function just fine, just as it has for 50 years. It’s the World Wide Web that is on fire. Pretty much has been since a bunch of people who don’t understand what Web 2.0 means decided they were going to start doing “Web 3.0” stuff.

        • UnderpantsWeevil@lemmy.world
          link
          fedilink
          English
          arrow-up
          11
          ·
          6 hours ago

          The Internet will continue to function just fine, just as it has for 50 years.

          Sounds of intercontinental data cables being sliced

  • OccultIconoclast@reddthat.com
    link
    fedilink
    English
    arrow-up
    1
    arrow-down
    2
    ·
    2 hours ago

    The increasing use of AI is horrifying. Stop playing Frankenstein! Quit creating thinking beings and using them as slaves.

  • rtxn@lemmy.world
    link
    fedilink
    arrow-up
    86
    ·
    9 hours ago

    “If you don’t have organic intelligence at home, store-bought is fine.” - leo (probably)

  • Electric@lemmy.world
    link
    fedilink
    arrow-up
    53
    ·
    8 hours ago

    Is the implication that he made a super insecure program and left the token for his AI thing in the code as well? Or is he actually being hacked because others are coping?

    • grue@lemmy.world
      link
      fedilink
      arrow-up
      140
      ·
      8 hours ago

      Nobody knows. Literally nobody, including him, because he doesn’t understand the code!

    • Ephera@lemmy.ml
      link
      fedilink
      English
      arrow-up
      7
      ·
      5 hours ago

      Potentially both, but you don’t really have to ask to be hacked. Just put something on the public internet and automated scanning tools will start checking your service for popular vulnerabilities.

    • Mayor Poopington@lemmy.world
      link
      fedilink
      English
      arrow-up
      19
      ·
      8 hours ago

      AI writes shitty code that’s full of security holes, and Leo here has probably taken zero steps to further secure his code. He broadcasts his AI-written software and it’s open season for hackers.

      • T156@lemmy.world
        link
        fedilink
        English
        arrow-up
        4
        ·
        4 hours ago

        Not just that, but he literally advertised himself as not being technical. That seems to be just asking for open season.

    • JustAnotherKay@lemmy.world
      link
      fedilink
      arrow-up
      7
      ·
      7 hours ago

      He told them which AI he used to make the entire codebase. I’d bet it’s way easier to RE the “make a full SaaS suite” prompt than it is to RE the code itself once it’s compiled.

      Someone probably poked around with the AI until they found a way to abuse his SaaS

    • RedditWanderer@lemmy.world
      link
      fedilink
      arrow-up
      1
      ·
      7 hours ago

      Doesn’t really matter. The important bit is he has no idea either. (It’s likely the former and he’s blaming the weirdos trying to get in)

  • formulaBonk@lemm.ee
    link
    fedilink
    English
    arrow-up
    42
    ·
    8 hours ago

    Reminds me of the days before AI assistants when people copy-pasted code from forums and then you’d get questions like “I found this code and I know what every line does except this ‘for( int i = 0; i < 10; i ++)’ part. Is this someone using an unsupported expression?”

      • Moredekai@lemmy.world
        link
        fedilink
        arrow-up
        37
        ·
        8 hours ago

        It’s a standard C-style for loop. It’s creating the integer variable i and setting it to zero. The second part is saying “do this while i is less than 10”, and the last part is saying what to do after each pass through the loop - increment i by 1. Under this would be the actual stuff you want to be doing in that loop. Assuming nothing in the rest of the code is manipulating i, it’ll do this 10 times and then move on.

        • Fermion@feddit.nl
          link
          fedilink
          arrow-up
          6
          ·
          7 hours ago

          I would also add that usually i will be used inside the code block to index locations within whatever data structures need to be accessed. Keeping track of how many times the loop has run has more utility than just making sure something is repeated 10 times.
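
          For example, a quick untested sketch (the array name and values here are made up) where i is used to index an array:

          #include <stdio.h>

          int main(void) {
              int scores[10] = {3, 7, 1, 9, 4, 6, 2, 8, 5, 0};  /* hypothetical example data */
              int total = 0;
              for (int i = 0; i < 10; i++) {
                  total += scores[i];  /* i picks out each element in turn */
              }
              printf("total = %d\n", total);  /* prints: total = 45 */
              return 0;
          }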

      • jqubed@lemmy.world
        link
        fedilink
        arrow-up
        11
        ·
        7 hours ago

        @[email protected] posted a detailed explanation of what it’s doing, but just to chime in that it’s an extremely basic part of programming. Probably a first week of class if not first day of class thing that would be taught. I haven’t done anything that could be considered programming since 2002 and took my first class as an elective in high school in 2000 but still recognize it.

      • JustAnotherKay@lemmy.world
        link
        fedilink
        arrow-up
        4
        ·
        edit-2
        7 hours ago

        for( int i = 0; i < 10; i ++)

        This reads as “assign an integer to the variable i and put a 0 in that spot. Do the following code, and once completed add 1 to i. Repeat until i reaches 10.”

        int i = 0 initializes i, tells the compiler it’s an integer (whole number), and assigns 0 to it, all at once.

        i++ can be written a few ways, but they all say “add 1 to i”.

        i < 10 tells it to stop once i reaches 10.

        for tells it to loop, and starts a block which is what will actually be looping.

        Edits: A couple of clarifications
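
        If it helps, the same loop can be rewritten as a while loop, which makes the three parts more obvious (just an illustrative sketch, not from the original question):

        int i = 0;            // initialization: runs once, before the loop starts
        while (i < 10) {      // condition: checked before every iteration
            // loop body goes here
            i++;              // update: runs at the end of each iteration
        }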

    • barsoap@lemm.ee
      link
      fedilink
      arrow-up
      1
      arrow-down
      1
      ·
      edit-2
      55 minutes ago

      i <= 9, you heathen. Next thing you’ll do is i < INT_MAX + 1 and then the shit’s steaming.

      I’m cooked, see thread.

        • barsoap@lemm.ee
          link
          fedilink
          arrow-up
          1
          ·
          edit-2
          3 hours ago

          I mean i < 10 isn’t wrong as such, it’s just good practice to always use <= because in the INT_MAX case you have to, and everything should be regular because of the principle of least astonishment: that 10 might become a #define FOO 10, which then might become #define FOO INT_MAX. Each of those changes looks valid in isolation, but if there’s only a single i < FOO in your codebase you’ve introduced a bug by spooky action at a distance. (Overflow on int is undefined behaviour in C, in case anyone is wondering what the bug is.)

          …never believe anyone who says “C is a simple language”. Their code is shoddy and full of bugs and they should be forced to write Rust for their own good.

          • kevincox@lemmy.ml
            link
            fedilink
            arrow-up
            3
            ·
            2 hours ago

            But your case is wrong anyway, because i <= INT_MAX will always be true, by definition. By your argument, < is actually better because it is consistent, from < 0 to iterate 0 times up to < INT_MAX to iterate the maximum number of times. INT_MAX + 1 is the problem, not <, which is the standard way to write for loops, and the standard for a reason.

            • barsoap@lemm.ee
              link
              fedilink
              arrow-up
              1
              ·
              edit-2
              1 hour ago

              You’re right, that’s what I get for not having written a line of C in, what, 15 years. Bonus challenge: write for i in i32::MIN..=i32::MAX in C, that is, iterate over the whole range, start and end inclusive.

              (I guess the ..= might be where my confusion came from because Rust’s .. is end-exclusive and thus like <, but also not what you want because i32::MAX + 1 panics).
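
              (For reference, one UB-free way to do it is to test for the end value before incrementing rather than after - a rough sketch:)

              #include <limits.h>

              int main(void) {
                  int i = INT_MIN;
                  for (;;) {
                      /* ... do something with i ... */
                      if (i == INT_MAX) break;  /* leave before the increment would overflow */
                      i++;
                  }
                  return 0;
              }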

  • hperrin@lemmy.ca
    link
    fedilink
    English
    arrow-up
    22
    ·
    8 hours ago

    “Come try my software! I’m an idiot, so I didn’t write it and have no idea how it works, but you can pay for it.”

    to

    “🎵How could this happen to meeeeee🎵”