• 57 Posts
  • 608 Comments
Joined 2 years ago
Cake day: June 11th, 2023



  • When PRs begin with a headline and checklist, the GitHub hover preview becomes useless. When the PR description begins with a summary of the change, the preview is very useful.

    Most of the time I see headlines and checklists in tickets I create, or in projects I create PRs for, I feel stifled, as if I have to produce something very inefficient or convoluted.

    The worst I have seen was at work, when I had to create bug tickets for a new system in a third party’s service desk, and they had a very excessive, guided, formalized submission form [for dumb users]. More than once, I wrote the exact same thing three times into three separate text boxes that required input (something like “describe what is wrong”, “describe what happens”, “describe how to reproduce”). Something that I could have described well, concisely, fully, and correctly in one or two sentences or paragraphs became an excessively spread-out, formalized mess. I’m certainly not your average end user, but man, that annoyed me. And the response of “we found this necessary” certainly did not fit my kind of user, maybe not even experienced IT personnel.

    At work, I’m glad I have a small and close enough team where I can guide colleagues and new team members into good or at least decent practice.

    Checklists can be a good thing where processes can be formalized: they serve as guidance for the developer and as proof of consideration for the reviewer. At the same time, they can feel inappropriate, and like noise, in other cases.

    I’ve been using horizontal-line separators to separate the description from the test description and from asides/scoping/wider context and considerations - maybe I will start adding headlines to those sections to be more explicit.
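
    To illustrate (the contents are made up, and the section layout is just one way to do it): the summary goes first, so the hover preview shows it, with separators before the test notes and wider context:

    ```markdown
    Fixes the pagination query so the last page of results is no longer dropped.

    ---

    Testing: verified locally against a seeded database with 0, 1, and 101 rows.

    ---

    Aside/wider context: the export endpoint uses the same pattern; a follow-up
    change there may be worthwhile.
    ```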


  • I use GitLab diffs in single-file-view mode; TortoiseGit Merge when a change exceeds what GitLab can reasonably display (including block-indent changes I can ignore in TortoiseGit Merge, or moves I can track better there); and WinMerge (previously I used KDiff) for manual copy-paste text diffing (like copying blocks from the code-change diff to compare categorically similar code, code moves, etc.).







  • These terms included affirming the statement that we ‘do not, and will not during the term of this financial assistance award, operate any programs that advance or promote DEI [diversity, equity, and inclusion], or discriminatory equity ideology in violation of Federal anti-discrimination laws,’

    Insane. I can’t even fathom adding such a condition. And to a well established org with a positive track record.

    Toxic offer. Wouldn’t even be able to say that inclusivity is a good thing.





  • While I agree with the later (or middle?) points - maybe for different reasons, or maybe I would have reasoned differently - I mostly disagree with the earlier points.

    Any really important comments get lost in the noise

    What kind of comments are they using?

    When I leave comments on GitLab they’re threads that get resolved explicitly. GitHub also uses resolvable threads. The assignee/creator goes through them one by one, and marks them as resolved when they feel they’re done with them. Nothing gets lost like that.

    I also make use of ‘⚠’ to mark significant/blocking comments and bullet points. Other labels, such as (or similar to) conventional-comment prefixes like “thought:” or “note:”, can indicate other priorities and the significance of comments.
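
    For example (the labels and contents here are only illustrative, not a fixed scheme):

    ```text
    ⚠ blocking: this query runs once per row; please batch it before merging
    suggestion: [attached code suggestion] - feel free to reject
    thought: we might want to extract a helper here eventually, not in this PR
    note: the same applies to the two similar blocks below
    ```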

    Instead of leaving twenty comments, I’d suggest leaving a single comment explaining the stylistic change you’d like to make, and asking the engineer you’re reviewing to make the correct line-level changes themselves.

    I kinda agree, but I often leave the comment on the (or a) piece of code in question, and often add a code-change suggestion to visualize/indicate what I mean. That comment can stand in for, and refer to, all other occurrences of this kind of thing. It doesn’t have to apply exclusively to those lines.

    Otherwise you’re putting your colleagues in an awkward position. They can either accept all your comments to avoid conflict, adding needless time and setting you up as the de facto gatekeeper for all changes to the codebase, or they can push back and argue on each trivial point, which will take even more time. Code review is not the time for you to impose your personal taste on a colleague.

    I make sure that my team has a common understanding - and that comments carry sufficient context/pretext to make it clear - that code-change suggestions and “I would have [because]” remarks can usually, or in general, be freely rejected unless specified otherwise. Often, comments include how important a change is to me, either in the comments themselves and/or in a comment summarizing a review iteration (with its set of comments). Comments can also serve as a spark for discussion about solutions and approaches, or about common or eventual goals of the changed code that may be targeted after the code changes currently under review.

    Review with a “will this work” filter, not with a “is this exactly how I would have done it” filter

    I wouldn’t want to do it like that, specifically. It’s a question of weighing risks and medium- and long-term maintainability against delivery, work, changeset, and review complexity and delay. Rather than “will this work”, I ask myself, “is this good enough [within context]”.

    Leave a small number of well-thought-out comments, instead of dashing off line comments as you go and ending up with a hundred of them

    Maybe I’ve had too many juniors to get into this mindset. But I’ve definitely had numerous times where I left many comments on reviews, even again on successive iterations. Besides reviewing the code technically, the review can also serve as a form of communication, assimilation, and teaching (the project and codebase at hand, work style, and other things).

    It’s good to talk about concerns, issues, and frustrations, as well as the upsides of doing so and working like that. Retrospectives and personal talks or discussions can help with that. Apart from other discussion, planning, and support meetings, the review is the interface between people and a great way to communicate.



  • Visual Studio provides some kind of AI even without Copilot.

    Inline (single-line) completions - I find these quite useful, not always but regularly.

    Repeated-edits continuation - I haven’t seen them in a while, but have used them on maybe two or three occasions. I am very selective about these because they’re not deterministic like refactorings and quick actions, whose correctness I can be confident in even when applying them across many files and lines. For example, “invert if” changes many line indents; if an LLM does that change, you can’t be sure it didn’t change any of those lines.
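
    To make the distinction concrete, here is a minimal sketch (in Python, not tied to any particular IDE) of what an “invert if” transformation looks like - a purely mechanical rewrite that a deterministic refactoring tool can guarantee is behavior-preserving, while an LLM edit offers no such guarantee:

    ```python
    def double_present(items):
        # Original shape: the loop body is guarded by a positive condition.
        results = []
        for item in items:
            if item is not None:
                results.append(item * 2)
        return results

    def double_present_inverted(items):
        # After "invert if": the condition is negated and the body
        # continues early. Many lines change indentation, but the
        # behavior is identical for every input.
        results = []
        for item in items:
            if item is None:
                continue
            results.append(item * 2)
        return results
    ```

    Both functions return the same results for any input; that equivalence is exactly what a deterministic refactoring asserts and an LLM-driven edit does not.
    
    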

    Multi-line completions/suggestions - I disabled those because they push away the code and context I want to see around the cursor, and cause noisy movement, for - in my limited experience - marginal usefulness, if any.

    In my company we’re still in a selective testing phase regarding customer agreements and then source-code integration with AI providers. My team is not part of that yet. So I don’t have practical experience with any analysis, generation, or chat functionality that has project context. I’m skeptical but somewhat interested.

    I did try it on private projects - one, I guess: a Nushell plugin in Rust, a language largely unfamiliar to me - and tried to make use of Copilot generating methods for me, etc. It felt very messy and confusing. The generated code was often not correct or sound.

    I use Phind and, more recently, ChatGPT for research/search queries. I’m mindful of the type of queries I use and which provider or service I use. In general, I’m a friend of reference docs, which are the only definitive source after all. I’m aware and mindful of the environmental impact of indirectly costly free AI search/chat. Often, AI can answer my questions more quickly than searching via a search engine and through upstream docs - especially when I am familiar with the tech and can relatively quickly be reminded, or guide the AI when it responds with bullshit or suboptimal or questionable stuff, or relatively quickly disregard the AI entirely when it doesn’t seem capable of answering what I am looking for.







  • One of the two associations is in power and actively dismantling society. The other develops a technical product and runs a Lemmy instance many people and other instances have blocked.

    Handling or judging them a bit differently seems quite fine to me.

    That being said, I’ve seen plenty of Lemmy dev connection criticism on this platform. I can’t say the same about FUTO.