Whether you’re really passionate about RPC, MQTT, Matrix, or Wayland, tell us more about the protocols or open standards you have strong opinions on!
Others have already said it, but XMPP and RSS. Also, nobody has mentioned NNTP yet.
I wish everything were accessible via NNTP and we had better NNTP clients. NNTP is like RSS but for forums (so Lemmy, Reddit, or anything where you can reply to posts). Download for offline reading, read in your client, define your own formatting, sorting, and filtering. Your client, your rules.
If Lemmy were accessible via NNTP, I could just download all the posts and comments I’m interested in and reply to them without any connection, and my replies would get synced to the server later when I connect to Wi-Fi or something.
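Just to make the offline part concrete, here’s a rough sketch of what an NNTP client has to do to pull a group down for later reading. The server and group names are placeholders, and a real client would also handle authentication and POST for sending replies back:

```python
# Rough sketch: download the last few articles of a newsgroup for offline reading.
# HOST/GROUP are placeholders -- point them at a server that actually carries
# the content you want.
import socket

HOST, PORT = "news.example.org", 119   # placeholder server
GROUP = "example.discussion"           # placeholder group


def read_line(f):
    return f.readline().decode("utf-8", "replace").rstrip("\r\n")


def read_multiline(f):
    """Read a dot-terminated NNTP multi-line response body (RFC 3977)."""
    lines = []
    while True:
        line = read_line(f)
        if line == ".":
            break
        if line.startswith(".."):       # undo dot-stuffing
            line = line[1:]
        lines.append(line)
    return lines


with socket.create_connection((HOST, PORT)) as sock:
    f = sock.makefile("rwb")
    print(read_line(f))                            # server greeting

    f.write(f"GROUP {GROUP}\r\n".encode()); f.flush()
    _, _, first, last = read_line(f).split()[:4]   # "211 count first last group"

    # Grab the last ten articles and stash them locally for offline reading.
    for num in range(max(int(first), int(last) - 9), int(last) + 1):
        f.write(f"ARTICLE {num}\r\n".encode()); f.flush()
        if not read_line(f).startswith("220"):     # article may have expired
            continue
        with open(f"article-{num}.eml", "w", encoding="utf-8") as out:
            out.write("\n".join(read_multiline(f)))

    f.write(b"QUIT\r\n"); f.flush()
```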
Back in the day I was a big Usenet fan. What’s the modern solution to the spam issue? At the time, folk wisdom was that the demise was being caused by spam, and that due to the nature of the protocol it was somewhat unsolvable.
I also wonder to what extent ActivityPub is the barrier to offline use? For Reddit, the Slide client had offline reading and, IIRC, posting. I’ve been disappointed it isn’t available for Lemmy. My guess has been it simply isn’t a priority for the devs. Maybe eventually we’ll get it.
I think it would be cool if RSS got put into Lemmy clients. For example, you could make a unified inbox for all accounts by automatically fetching the private RSS feed for incoming messages for every logged-in account. I’ve manually set this up a couple of times, but it’s tedious, and it completely lacks smoothness when it comes to clicking a link, replying, etc. A client could add a little finesse to fix that.
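A client could do that merge itself. As a sketch of the idea (the inbox-feed URLs here are placeholders for whatever private RSS link your instance hands out per account):

```python
# Sketch: merge the private inbox feeds of several accounts into one list,
# newest first. The URLs are placeholders -- substitute the private RSS links
# your instances give you for each logged-in account.
import time
import feedparser  # pip install feedparser

INBOX_FEEDS = [
    "https://lemmy.example/feeds/inbox/ACCOUNT1-TOKEN.xml",  # placeholder
    "https://other.example/feeds/inbox/ACCOUNT2-TOKEN.xml",  # placeholder
]

entries = []
for url in INBOX_FEEDS:
    feed = feedparser.parse(url)
    account = feed.feed.get("title", url)
    for entry in feed.entries:
        stamp = entry.get("published_parsed") or entry.get("updated_parsed") or time.gmtime(0)
        entries.append((stamp, account, entry.get("title", "(no title)"), entry.get("link", "")))

entries.sort(reverse=True)  # newest first, across all accounts

for stamp, account, title, link in entries[:20]:
    print(time.strftime("%Y-%m-%d %H:%M", stamp), "|", account, "|", title, "|", link)
```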
True, Lemmy (and ActivityPub in general) could integrate RSS and also be accessible via NNTP.
Or at least add some functionality for offline reading/posting. It’s just not a priority for devs now.
About spam: most of it was coming from Google Groups, and since Google unpeered from Usenet, there is no spam.
Probably it would be better to edit my comment, but I’ll go with a reply to myself.
To all fans of RSS: there’s a service called FeedBase that is essentially an RSS-to-NNTP gateway. You add your RSS feed to it and it becomes a newsgroup on their server, which you can subscribe to using any NNTP client. New articles appear as new posts in that newsgroup and you can post your own replies to them. So, you get RSS, but with discussions or comments.
If you try this, let me know what RSS feeds you’re reading, so we could read the articles together and have some discussion there!
P.S. This comment is not an ad. I genuinely love feedbase and use that myself.
Holy cow, that’s neat as hell! Thanks for sharing!
IPv6
I mean, why the hell is IPv4 still a thing??
Because ipv6 is yucky
Yeah, I’m anti-IPv6, so I’m never going to use it personally. IPv4 is enough for me.
Go ahead and use that on your home network, but if you work in IT and deploy it on public networks I’m going to kick you in the nuts.
I hear you on this! Took me a whole day to get my router to delegate IPv6 properly. I’m sure that had it been better adopted, I wouldn’t be having such a hard time.
Why should this be at the editor level? There should be a linter that applies all these stylistic formatting changes to all files automatically. If the developer’s own editing tools or personal workflow have a chance to introduce non-standard styles to the codebase, you have a deeper problem.
I want both. When I am typing code in my editor I want it to follow the styles of the project. Then when I run the linter/formatter it will fix the mistakes.
The last thing I want is to start a new
if foo {
statement and have the indent be half the indent of the if above. That would be too distracting.

Why should this be at the editor level?
Because for every programming language there’ll be people using text editors, but you’ll never succeed in even creating code formatters for them all.
The greatness in this project is in aiming low and making things better through simple achievable goals.
XMPP
Why not matrix?
You’re going off-topic from the OP question :-) But to answer your new question: I don’t trust Matrix enough when it comes to privacy. I know that this link is old, but still: https://disroot.org/en/blog/matrix-closure
Then again, I don’t trust Signal that much either, but sometimes compromises need to be made to get things done. With XMPP, end users can host their own server if they wish to, without metadata going to a centralized point. And video calls via XMPP and Conversations were a pleasure to use during the COVID-19 pandemic.
I’d love to see more adoption of… I2C!
Bazillions of motherboards and SBCs support I2C and many have the ability to use it via GPIO pins or even have connectors just for I2C devices (e.g. QWIIC). Yet there’s very little in the way of things you can buy and plug in. It feels like such a waste!
There’s all sorts of neat and useful things we could plug in and make use of if only there were software to use it. For example, cheap color sensors, nifty gesture sensors, time-of-flight sensors, light sensors, and more.
There’s lmsensors, which knows I2C and can magically understand zillions of temperature sensors and PWM things (e.g. fan control). We need something like that for all those cool devices and chips that speak I2C.

If you have an unused VGA port, you can use the DDC pins for I2C. Be sure to add ESD protection if you do this. An I2C isolator would be even better.
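If you want to see that in action, here’s a quick sketch of reading a display’s EDID through the DDC channel from Linux userspace with the smbus2 wrapper around i2c-dev. The bus number is a placeholder (check i2cdetect -l for which /dev/i2c-* is your DDC channel); 0x50 is the standard EDID EEPROM address:

```python
# Sketch: read the start of a monitor's EDID over DDC, which is just I2C.
from smbus2 import SMBus  # pip install smbus2

BUS = 5           # placeholder -- whichever /dev/i2c-* your DDC channel is
EDID_ADDR = 0x50  # standard EDID EEPROM address

with SMBus(BUS) as bus:
    # Read the first 32 bytes of the EDID block (SMBus block reads cap at 32 bytes).
    data = bus.read_i2c_block_data(EDID_ADDR, 0x00, 32)
    print(bytes(data).hex())
    # A real display answers with the EDID magic: 00 ff ff ff ff ff ff 00
```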
I2C is really not meant to be used over cables. It has a very limited common mode input voltage range and it can’t handle much capacitance on the bus.
Except that in the case of VGA (and DVI, HDMI, and DisplayPort) the i2c interface is intended for use over the cable. All of those ports have a pair of i2c pins and corresponding wires in their cables. The i2c interface is used for DDC/EDID which is how the computer can identify the capabilities and specifications of the attached display. DDC even provides some rarely-used control functionality. Probably the most useful of which is being able to control the brightness of the display from software. I use the ddcci module on Linux and it lets me control my desktop monitor brightness the same way a laptop would, which is great. I have no idea why this isn’t widely used.
Edit:
This i2c interface is widely used to control the lighting on modern graphics cards that have RGB lighting. We’ve spent a lot of time reverse engineering these chips and their i2c protocols for OpenRGB. GPU chips usually have more i2c buses than the cards have display connectors, so the RGB chip is wired to one of the unused buses. I think AMD GPUs tend to have 8 separate i2c buses but most cards only use 4 or 5 of them for display connectors. There is also an i2c interface present on RAM slots normally used for reading the SPD chip that stores RAM module specifications, timings, etc. This interface is also used for RAM modules with controllable RGB lighting.
I2C is a bit goofy though. As a byproduct of being an undiscoverable bus you basically just have to poke random addresses and guess what you’re talking to. The fact lmsensors i2c detection works as well as it does is a miracle. (Plus you get the neat issue where even the act of scanning the bus can accidentally reconfigure endpoints)
Yeah, the lack of proper discoverability on i2c truly sucks. You have to just poke random addresses and hope for the best to see if an i2c device exists on the bus. It’s a great standard, but I wish it would get updated with some sort of plug-and-play autodetection feature. A standardized device PID/VID system like USB and PCI would be acceptable, or a standardized register that returns a part string. Anything other than blindly poking registers and hoping you’re not accidentally overvolting the CPU or whatever, because a register on the device you expected happens to overlap with the “overvolt the CPU” register of a different device at the same address.
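For anyone who hasn’t watched that blind poking happen, here’s roughly what an i2cdetect-style scan boils down to (the bus number is a placeholder, and as the comments above say, even scanning can upset touchy devices, so don’t run it on a bus you care about):

```python
# Sketch: probe every valid 7-bit address and note which ones ACK.
# (The real i2cdetect mixes quick-writes and reads depending on the address range.)
from smbus2 import SMBus  # pip install smbus2

BUS = 1  # placeholder bus number

found = []
with SMBus(BUS) as bus:
    for addr in range(0x03, 0x78):   # usable 7-bit address range
        try:
            bus.read_byte(addr)      # a bare receive; anything that ACKs shows up
            found.append(addr)
        except OSError:
            pass                     # no ACK at this address

print("devices at:", [hex(a) for a in found])
```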
I’m curious. There were some i2c-connected memory devices before. Is there some forgotten spec that allows for a flexible device lookup / logging capability? Something that acts like device tree but stays specific to the bus. It wouldn’t be practical for a lot of applications, but I could see it being useful for some niche stuff.
I second Matrix, though I’ve been waiting for e2ee direct p2p (the Dendrite project) to be worked on for a while. Having something like that, something truly decentralized while secure and hiding metadata where possible, would be a dream.
Apparently Dendrite is just in maintenance mode due to insufficient funds. It’s what I set up on a test instance because it is lighter, etc. Go figure.
Conduit might be an option. It’s still under development. It’s also lightweight, being written in Rust (instead of Python, as Synapse is).
Yeah, I’ve been following that. It seemed at the time that the project didn’t implement nearly as much of the spec as Dendrite, which was itself still lagging behind Synapse.
Might take another look though. I really did want to use it since it was written in Rust. It seemed like it should be more performant, everything else being equal.
I love I2P. I wish it had more adoption / was easier to use.
Remember SOAP? Remember XML-RPC? Remember CORBA?
Those were not very good.
I’ve worked with all of them and hate all with a passion. SOAP wasn’t bad in theory but lots of APIs and clients didn’t implement it properly.
RSS (RDF Site Summary, or Really Simple Syndication). It’s used a fair amount, but it’s usually buried. Many people don’t know it exists, and because of that I’m afraid it will one day go away.
I find it a great, simple way to stay up to date across multiple websites the way I want to (on my terms, not theirs). By the way, it works on Lemmy too :)
Honestly there is rarely a blog I want to follow that doesn’t have it. I do think it would be great to have more readers using it so that it becomes more significant, but for my reading it is actually pretty great.
I wish all the big players would agree on one of the many open chat and IM protocols. It’s like kindergarten, where the toddlers don’t want to share toys.
Persistent object ooze prevention? Yes, that’s a solved problem.
Can you please explain what this is?
They are humorous IETF standards published on 1 April over the years. These are specifically about implementing internet protocols using carrier pigeons instead of more traditional media like wires or optical fiber.
Look at the date of the linked RFC documents…
OpenTelemetry everywhere please
PGP/GPG. I would like to see the web of trust take off. Also, I love the aesthetic of anything that’s been signed and would like to see blog posts everywhere bracketed by long blocks of random symbols.
Key signing and the web of trust are pretty cool, but I’m somewhat opposed to it on a fundamental level. Let me decentralize my shit and mind my own business, if you feel what I mean.
Anything that’s relatively centralized identity-wise is not something I’m a huge fan of right off the hop.
Let me decentralize my shit…
Isn’t that why it’s a web of trust, and not a center of trust? I think you might be confusing that with public key infrastructure.
Also, you can’t decentralize your shit without a second party. That’s kind of the point.
Isn’t that why it’s a web of trust, and not a center of trust?
Yes, but it’s still a trust; I don’t consider that to be fully decentralized. It serves a purpose, don’t get me wrong, but I won’t be signing my online profiles using WoT keys anytime soon.
The web makes it decentralized, which is accurate, though I tend to use “decentralize” way more aggressively, on a level local to me. I suppose it’s probably more dis-integrated than anything. But whatever.
I wish people used email for chat more. SMTP is actually a pretty great protocol for real-time communication. People think of it as this old, slow protocol, but that’s mostly because the big email providers make it slow. Gmail, by default, waits ten seconds before it even tries to send your message to the recipient’s server. And even then, most of them do so much processing on your messages that it usually takes several seconds from the time a message is received to the time it shows up in your account.
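The protocol itself does very little. Here’s a minimal sketch of a direct submission (hostnames, credentials, and addresses are placeholders); with a decent provider, or your own server, the whole exchange finishes in well under a second:

```python
# Sketch: send a short "chat" message over plain SMTP submission.
import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "me@example.com"
msg["To"] = "friend@example.net"
msg["Subject"] = "ping"
msg.set_content("Want to grab lunch?")

with smtplib.SMTP("smtp.example.com", 587) as smtp:  # placeholder submission server
    smtp.starttls()
    smtp.login("me@example.com", "app-password")     # placeholder credentials
    smtp.send_message(msg)
```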
There’s a project called Delta Chat that makes email look and act like a chat app. If you have a competent email service, I think it’s better than texting. It doesn’t stomp on the images you send like SMS and Facebook do, everyone has it unlike all the proprietary services, and you can run your own server for it that interacts with everyone else’s servers.
Unfortunately, Google, Microsoft, etc all block you if you try to run your own server “to protect against spam”. Really, I’m convinced that’s just anticompetitive behavior. The fewer players are allowed to enter the email market, the less competition Gmail and Outlook will have.
As much as I like ProtonMail too, unfortunately their encryption model prevents it from working with Delta Chat. I’d love to see Proton make a compatible chat app that works with their service.
I made an email service called Port87 that I’m working on making compatible with Delta chat too. I’d love to see people using email the way it was originally meant to be used, to talk to each other, without being controlled by big businesses.
The delay is there because email has no deletion support.
And a host of other shortcomings.
I’d rather we replaced email with matrix
If you’re relying on the remote server to delete something, you can’t trust it no matter what protocol you’re using.
For a regular email, the chance to undo might be fine, but for real time communication, it’s just an unnecessary road block.
Maybe if it was optional per recipient, or per conversation, or better yet, depending on the presence of a header, it might be fine. Gmail only supports all-on or all-off.
If you’re relying on the remote server to delete something, you can’t trust it no matter what protocol you’re using.
I mean yeah I wouldn’t bet my life on it, but for the 99% of regular communication it’s fine. That’s no reason to not have it in the protocol and muck around with 10 second delays instead.
Oh, another awesome thing about email is that you can ensure that your address is always yours, even if you use an email service provider like Gmail. Any provider that supports custom domains will allow you to use your own domain for your address, then if you want to change your provider, you keep your address. So, since I own hperrin.com, I can use the address me@hperrin.com, and I know it’ll always be mine as long as I pay for that domain.
This is a much better model than anything else. Even on the fediverse, you can’t have your own address unless you run your own instance.
If your email service provider goes out of business or gets sold off (skiff.com, anyone?), as long as you’re on your own custom domain, your address is still yours.
I’m working on custom domains for Port87. It’s definitely a feature I think every email provider should offer.
SMTP is a terrible protocol: text-based for sending effectively binary data, with complex header wrapping and “generate a random delimiter” framing. We really need an HTTP/2 of SMTP.
That being said I agree that it exists and works. The biggest blocker to more IM-style communication is largely the UI and user expectations. I have no problem having quick back-and-forths over email but most people don’t expect it.
Fair enough. Sending binary data over SMTP adds a lot of overhead, because it all has to be encoded. We should fix that.
Honestly my biggest complaint is header wrapping. Technically you need to wrap lines at 998 bytes (not that any reasonable server actually cares). But in order to wrap a header you need to add spaces (because you can only break a line after whitespace). But where spaces are unimportant depends on each specific header. So you need to have custom wrapping rules for each header.
In practice no one does this. They just hope that headers naturally have spaces or break them in random locations (corrupting them) because the protocol was too stupid.
Binary protocols are just so much simpler. Give the length, then the data. Problem solved. Maybe we could even use a standard format for structured headers. But that would be harder to do while maintaining backwards compatibility.
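To make the contrast concrete, here’s a toy sketch of that kind of framing. It’s not any real protocol, just the “length, then data” idea: no folding rules, no delimiters, and the body can carry raw binary untouched.

```python
# Toy length-prefixed framing: each chunk is a 4-byte big-endian length + raw bytes.
import struct


def pack_frame(headers: dict, body: bytes) -> bytes:
    header_blob = "\n".join(f"{k}: {v}" for k, v in headers.items()).encode()
    out = bytearray()
    for chunk in (header_blob, body):
        out += struct.pack(">I", len(chunk)) + chunk
    return bytes(out)


def unpack_frame(data: bytes) -> tuple:
    chunks, offset = [], 0
    for _ in range(2):                    # two chunks: headers, then body
        (length,) = struct.unpack_from(">I", data, offset)
        offset += 4
        chunks.append(data[offset:offset + length])
        offset += length
    return chunks[0], chunks[1]


frame = pack_frame({"Subject": "no folding needed"}, b"arbitrary \x00 binary body")
headers, body = unpack_frame(frame)
print(headers.decode(), body)
```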
IoT devices shouldn’t connect to Wi-Fi. Z-Wave or Zigbee is much better suited to IoT stuff, but it seems to mostly get adopted in very limited, locked-down proprietary shit like Hue lights.
Isn’t Matter supposed to solve this issue?
There’s only one case I’ve found where Wi-Fi use seems acceptable in IoT: ESPHome. It’s open-source firmware for microcontrollers that makes DIY IoT sensors and controls accessible over LAN without phoning home to whatever remote server, without trying to make anything accessible over the Internet, and without breaking in any way if the device has no route to the Internet.
I still wouldn’t call Wi-Fi use ideal even there; mesh can help in larger homes and Z-Wave/Zigbee radios tend to be more power efficient, though ESP32 isn’t exactly suited for a battery-powered device that’s expected to run 24/7 regardless.
Markdown. It’s only in tech spaces that it’s preferred, but it should be used everywhere. You can even write full books and academic papers in markdown (maybe with only a few extensions like LaTeX / MathJax).
Instead, in a lot of fields, people are passing around variants of microsoft word documents with weird formatting and no standardization around headings, quotes, and comments.
Man, I’ve written three novels plus assorted shorter form stories in markdown.
There’s a learning curve, but once you get going, it’s so fluid. The problem is that when it comes time to format for release, you have to convert to something else, and not every word processor can handle markdown. It’s extra work, but worth it, imo.
Silly question: why can’t you convert markdown to PDF and pass that to publishers?
Because it isn’t doc or docx.
Publishers are pissy about such things. Even with self-publishing (which is what I do now), the various outlets still have limits on what they will use. Amazon accepts something like three file formats, including their own, and PDF isn’t on the list.
I could just do PDF for directly giving books away to people, but even then, epub is usually a better pick in terms of readability: it’s the standard for actual books, and ereaders tend to display it better than PDFs. Most people reading books as files would be using something that gives a better experience with epub than PDF.
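For what it’s worth, pandoc handles the markdown-to-EPUB/DOCX/PDF conversion pretty well. A quick sketch using the pypandoc wrapper (the file names are placeholders, and pandoc itself has to be installed):

```python
# Sketch: convert a markdown manuscript to EPUB and DOCX with pandoc via pypandoc.
import pypandoc  # pip install pypandoc (pandoc must be installed separately)

# Markdown manuscript -> EPUB for ereaders and stores that accept it.
pypandoc.convert_file("manuscript.md", "epub", outputfile="manuscript.epub",
                      extra_args=["--metadata", "title:My Novel"])

# The same source -> DOCX for outlets that insist on Word formats.
pypandoc.convert_file("manuscript.md", "docx", outputfile="manuscript.docx")
```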
For sure, I bet full-fledged editors like Word don’t even let you import it.
Not correctly, no. LibreOffice Writer does a bit better, but still misses some bits.
Markdown is awesome, I agree! I did not realize you could extend markdown with anything other than html. The html extension is quite nice to do anything that markdown doesn’t support natively, but I wish there was an easier way to extend markdown. Maybe the ones you listed are what I need.
HedgeDoc / HackMD support a good amount of extensions out of the box. I think Typora and Obsidian do also (but they’re not open source).
Depends on the type of book, since you need HTML for all non-default styles. That raises the bar: you need a bit of web dev knowledge, which removes the biggest benefit of markdown: simplicity / ease of use.
I agree 💯