

Grab your shovel, time to lucky 10000 some bash.org content!


Devil’s advocate here: switching to Linux wouldn’t help.
I recently had to set up a public web server for an org I belonged to. The idea was that I would set everything up in the most secure and unbreakable way I could think of, write documentation on how to do everything, transfer ownership of all the “break glass” credentials, and lock my own account once I was done.
This turned out to be a huge mistake. What was supposed to be some free work for a hobby group turned into a massive pain every day at 3-4am (due to time zone differences).
The person in charge of managing access control couldn’t figure out how wg-easy works. She handed her own credentials to EVERYONE who needed access, which obviously didn’t work due to IP conflicts. When this was pointed out, she modified the IP in every config file, which, of course, still didn’t work. It took forever to get her to NOT share credentials and to create a new peer for each user.
The biggest problem is somehow NOT the Windows or Mac users. A single Linux user is causing the most headaches. When I set up WireGuard, I tested on both Linux and Windows, with Linux being what I use myself. I ran into some minor hiccups getting split DNS to work correctly, but it was relatively easy to fix in NetworkManager. I assumed that if there were other Linux users, they would be able to fix it themselves. Obviously I was wrong.
Said person had DoH enabled in their browser and didn’t know how to disable it, and answered with variations of “I don’t know” about their network stack, DNS resolver, etc. Almost every request to run dig or cat /etc/resolv.conf descended into “what’s that?” or completely incorrect commands (e.g. feeding an http URL to dig). I could not figure out what the person was running, and neither could they (I think it was systemd-resolved, but I still don’t know as of now). Eventually, after 3 workdays of trying to fix this at 3-4am, I gave up. I can’t help with a personal device belonging to somebody who has no idea what they’re doing.
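In hindsight I could have scripted the guessing game instead of asking questions. A rough sketch (assuming the standard /etc/resolv.conf layout; the path parameter is just so it can be pointed at a copy the user sends over):

```python
# Sketch: guess which DNS resolver a Linux box is using by inspecting
# resolv.conf. systemd-resolved installs a stub listener at 127.0.0.53,
# which is a strong hint; other loopback addresses suggest dnsmasq/unbound.
def guess_resolver(resolv_conf_path="/etc/resolv.conf"):
    nameservers = []
    with open(resolv_conf_path) as f:
        for line in f:
            line = line.strip()
            if line.startswith("nameserver"):
                nameservers.append(line.split()[1])
    if "127.0.0.53" in nameservers:
        return "systemd-resolved (stub listener)"
    if any(ns.startswith("127.") for ns in nameservers):
        return "some local resolver (dnsmasq, unbound, ...)"
    return "direct upstream nameservers: " + ", ".join(nameservers)
```

Not bulletproof (resolv.conf can be a symlink pointing anywhere, and the browser’s DoH bypasses it entirely), but it beats three days of “what’s that?”.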
As for why I’m mentioning this story: switching to Linux wouldn’t have helped this lady with her problem. There are similar issues on Linux that would prevent a login or a graphical session (there was an old work machine running VLC, where VLC threw GBs worth of Qt errors, eventually causing systemd to crash on reboot once the drive was full). The problem here isn’t just the system, it’s the user. A lot of people seem to be allergic to providing more details than “it’s not working”, “I don’t know” and “I didn’t try anything”. If the general mindset is “I don’t know what’s wrong, and no details”, there’s no saving the user from technical problems.
On a side note, for “why the hell did I knowingly volunteer to set up a web server for someone else”: the whole project was already 5 months overdue, and it was beneficial for everyone to get the server up ASAP. The person in charge hadn’t thought about anything (DNS, hosting, software stack) other than asking a bunch of CS college students to design a web app for her. Needless to say, the students bailed on her (which is probably the best scenario, in terms of maintainability and security concerns). It also only took me 2 weeks to set everything up (LAMP stack, K3s, CrowdSec, open-appsec, WireGuard, etc.).


What’s your emergency “break glass” policy?
Is it a bottle of whiskey?


I agree that Matrix is a slow and buggy hot mess, but its issues mainly lie with scaling. As long as your instance is small, it works well enough. Imo this is architectural and will never be fixed in Synapse.
As for there being no alternative to Discord: I think the problem is that people have come to expect a certain level of QoS from hosted services that is expensive for hobbyists to maintain (CDNs, load balancing, NAT traversal, DDoS protection, etc.). I think this is fundamental to how we’re abusing IP when it’s way past its prime and on life support via middleboxes. If we want to reclaim this space, the best way forward would be something like NDN, but the transition cost would be so astronomical that nobody wants to pay it.
Our minds like to process entities/companies like Google as human beings, which lets us assign emotions to them. But the truth is, they are nothing but a glorified Chinese room experiment.
People made the largest browser engine and operating system, not Google. Without people, the company is nothing. A company like Google is nothing but a set of self operating rules.
I love/loathe Google just as much as I love/loathe my weekly /tmp cleaning cron job. Even if it accidentally nukes my files, it’s just doing as it’s designed to do.
Design a system to maximize shareholder value, and it will do exactly that, without caring one bit about human ethics.


Anyways, I’m trying to get people in specific vulnerable communities to switch to Matrix. But the number of people refusing to do so out of convenience (many even refuse to set up MFA or use different passwords across their online accounts, including Discord) is staggering.


stares at 300 line shell script+ansible mess that updates/sets up Forgejo, Nextcloud, ghostcms
“Yes… It’s automated”


Exactly. Unless you are actively doing maintenance, there is no need to remember which DB you are using. It took me 3 minutes just to remember my own Nextcloud setup, since it’s fully automated.
That’s the whole point of using tiered services: you look at things at the layer you are on. Do you also worry about your wifi’s link-level retransmissions when you are running curl?


I may be biased (PhD student here) but I don’t fault them for it. Ethics is something that 1) requires formal training, 2) requires oversight, and 3) differs from person to person. Quite frankly, it’s not part of their training, it’s never been emphasized as part of their training, and it’s subjective, shaped by cultural experience.
What is considered an unreasonable risk of harm is going to be different for everybody. To me, if the entire design runs locally and does not collect data for Google’s use, then it’s perfectly ethical. That said, this does not prevent someone else from adding data collection features later. I think the original design of such a system should put a reasonable amount of effort into preventing that, but if that is done, there’s nothing else to blame the designers for. The moral responsibility lies with the one who pulled the trigger.
Should the original designer have anticipated this issue and thus never taken the first step? Maybe. But that depends on a lot of circumstances we don’t know, so it’s hard to say anything meaningful.
As for the “more harm than good” analysis, I absolutely detest that sort of reasoning, since it attempts to quantify social utility in a purely mathematical sense. If this reasoning holds, an extreme example would be justifying harm to any minority group as long as it maximizes benefit for society. Basically Omelas. I believe a better quantitative test is to check whether harm is introduced to ANY group of people; as long as that’s the case, the whole thing is unethical.


This is common for companies that like to hire PhDs.
PhDs like to work on interesting and challenging projects.
With nobody to rein them in, they do all kinds of cool stuff that makes no money (e.g. Intel Optane and transactional memory).
Designing a realtime scam-analysis tool under resource constraints is interesting enough to get greenlit, but makes no money.
Once it’s released, they’ll move on to the next big challenge, and with nobody left to maintain their work, it will be silently dropped by Google.
I’m willing to bet more than 70% of the Google graveyard comes from projects like these.


I keep hearing good things, but I have not yet seen any meaningful results for the stuff I would use such a tool for.
I’ve been working on network function optimization at hundreds of gigabits per second for the past couple of years. Even with MTU-sized packets you only get approximately 200 ns of processing time per packet (assuming no batching). Optimizations generally involve manual prefetching and using/abusing NIC offload features to minimize atomic instructions (this also runs on ARM, where gcc compiles an atomic fetch-and-add into a function built on a load-linked/store-conditional loop that takes approximately 8 times the regular memory access time for a write). Current AI-assisted agents cannot generate efficient code that runs at line rate. There are no textbooks or blogs that explain in detail how these things work, so there are no resources for them to be trained on.
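For a sense of scale, the per-packet budget falls straight out of the line rate. A back-of-envelope sketch (the 100 Gbit/s figure and 1500-byte MTU here are illustrative, not the exact setup above):

```python
def per_packet_budget_ns(rate_bps, payload_bytes, overhead_bytes=38):
    """Time between back-to-back packets at a given line rate.

    overhead_bytes covers Ethernet preamble + SFD + IFG + header + FCS
    (7 + 1 + 12 + 14 + 4 = 38 bytes on the wire per frame).
    """
    wire_bits = (payload_bytes + overhead_bytes) * 8
    return wire_bits * 1e9 / rate_bps

# At 100 Gbit/s with 1500-byte payloads that's ~123 ns per packet,
# which is why batching is the usual escape hatch from the budget.
budget = per_packet_budget_ns(100e9, 1500)
```

At 64-byte minimum-size packets the budget drops under 10 ns, which is less than a single cache miss.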
You’ll find a similar problem if you prompt them to generate good RDMA code. At best you’ll get something that barely works, and almost always the code cannot efficiently exploit the latency reduction RDMA provides over traditional transport protocols. The generated code usually looks like how a graduate CS student imagines RDMA works, but is usually completely unusable, either requiring additional PCIe round trips or having severe thrashing issues with main memory.
My guess is that these tools are ridiculously good at stuff they can find examples of online. For stuff with no examples, they are woefully underprepared, and you still need a programmer to do the work line by line.


As much as I hate the concept, it works. However:
It only works for generalized programming (e.g. “write a Python script that parses CSV files”). For any specialized field this does NOT work (e.g. “write a DPDK program that identifies RoCEv2 packets and rewrites the IP address”).
It requires the human supervising the AI agent to know how to write the expected code themselves, so they can prompt the agent to use specific techniques (e.g. use Python’s csv library instead of string.split). This is not a problem now, since even programmers fresh out of college generally know what they are doing.
If companies try to use this to avoid hiring/training skilled programmers, they will have a very bad time in the future when the skilled talent pool runs dry and nobody knows how to identify correct vs incorrectly written code.
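On the csv-library point above: a naive string split breaks on quoted fields, which is exactly the kind of thing the supervisor has to know about to prompt for. A minimal illustration:

```python
import csv
import io

# A quoted field containing a comma: naive splitting mangles it,
# while the csv module parses it correctly.
line = 'Ada,"Hello, world"'

naive = line.split(",")                    # splits inside the quotes
parsed = next(csv.reader(io.StringIO(line)))  # respects the quoting
```

Here `naive` comes out as three broken pieces, while `parsed` is the intended two fields. If the supervisor can’t tell these apart, neither can they tell whether the agent’s output is correct.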


There’s also the change from circuit switching to packet switching, which drastically changes how the handover process works.
tl;dr - handover in 5G is buggy and barely works. The whole business of switching from one service area to another in the middle of a call is held together by hopes and dreams.


Somehow I disagree with both the premise and the conclusion here.
I dislike direct answers to things, as they discourage understanding. What is the default memory allocation mechanism in glibc malloc? I could get the answer “sbrk() and mmap()” and call it a day, but I find understanding when it uses mmap instead of sbrk (since sbrk isn’t NUMA-aware but mmap is) way more useful for future questions.
Meanwhile, Google adding a tab for AI search is helpful for people who want to use just AI search. It doesn’t take much away from people doing traditional web searches. Why be mad about this instead of the other genuinely questionable decisions Google is making?


Nope. Plenty of people want this.
In the last few years I’ve seen plenty of cases where CS undergrads get stumped when ChatGPT is unable to debug/explain a question for them. I’ve literally heard “idk, because ChatGPT can’t explain this Lisp code” as an excuse during office hours.
Before LLMs, there was also a significant number of people who used GitHub issues/Discord to ask simple application usage questions instead of Googling. There seems to be a significant decrease in people’s willingness to search for an answer, regardless of AI tools existing.
I wonder if it has to do with weaker reading comprehension skills?


Agreed. Personally I think this whole thing is bs.


A routine that just returns “yes” will also detect all AI. It would just have an abnormally high false positive rate.
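To make that concrete, a toy sketch with made-up labels: the constant-“yes” detector catches every AI sample (recall 1.0) while also flagging every human (false positive rate 1.0).

```python
def always_yes(_sample):
    # The trivial "detector": flags everything as AI-generated.
    return True

# Made-up ground truth: True = AI-generated, False = human-written.
labels = [True, True, False, False, False]
preds = [always_yes(s) for s in labels]

true_pos = sum(p and l for p, l in zip(preds, labels))
false_pos = sum(p and not l for p, l in zip(preds, labels))

recall = true_pos / sum(labels)               # 1.0: every AI sample "detected"
fpr = false_pos / sum(not l for l in labels)  # 1.0: every human flagged too
```

Which is why “detects all AI” is meaningless without the false positive rate next to it.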


Not sure about GreaseMonkey, but V8 compiles JS to an IL.
Node.js has a debugging feature that emits the generated IL code for inspection.


How much of that is cached state, based on the percentage of RAM available?


That was the context. The problem was connecting to WireGuard, which more and more people are doing thanks to general awareness of VPNs.


Huh? Where in my post did I defend MS? I was there when Ballmer and crew decided to sue anyone with a pulse for using Linux. I was there when the Cathedral acquired the Bazaar (and I deleted my account over it), and I am still here using Linux and BSD on every single machine I own, with the exception of one. I still hold a grudge against Mr. Bill “jump on a roller to show how fit you are” Gates, and I have refused to purchase anything from their game catalog since 2011. Hopefully with this context, you will no longer misconstrue my point as “defending Microsoft”.
Alas, normal users care about neither. The computer is just a tool that lets them do work, which lets them put food on the table. If your assistance is just “boo hoo, use Linux”, that’s not productive for them or for us. Joe Schmoe isn’t gonna care that he should save his documents as ODT instead of DOCX. He needs that document working, with no hassle, NOW.
Cases in video game modding: 1. GShade, where the developer deliberately made people’s games segfault if the tool was compiled on their own after an update; 2. MultiMC, where the developer personally threatened to sue for trademark violation after the application was packaged for a Linux distro; 3. Bukkit, where one dev decided to DMCA and take down all instances of the project.
Outside of video games: the entire University of Minnesota, which attempted to inject backdoors into the Linux kernel that were not caught until they published a paper.
Also, for the “good dudes” part: regardless of intentions, if the damage is done, the harm is done. If a suitcase falls from an airplane and kills me tomorrow, I won’t care whether it was intentional. I’ll be dead.
Going back to the original blog post: there is both a user problem and a technical problem here. The technical problem “could” be fixed by switching to Linux (assuming systemd or GNOME doesn’t get to it first), but the user problem can’t. Calling anyone who points out the user problem a “corpo drone” isn’t going to make it go away.