For real, it almost felt like an LLM-written article the way it basically said nothing. Also, the way it puts everything in bullet points is just jarring to read.
Hi! I am Creesch, also creesch on other platforms :)
- 0 Posts
- 15 Comments
Creesch@beehaw.org to Programming@programming.dev • How common is it to code review like this?
3 · 2 years ago
It depends on the platform you are using. But for platforms like GitHub and GitLab there are extensions for popular IDEs and editors available that allow you to review all changes in the editor itself.
This at the very least allows you to simply do the diffing in your own editor without having to squash or anything like that.
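As a rough sketch, you can get most of the way there with plain git and no extension at all; the PR number, remote name and base branch below are placeholders, and the refs/pull/&lt;id&gt;/head convention assumed here is GitHub’s:

```python
import subprocess

# Rough sketch, not an extension: fetch a pull request head locally and diff it
# against the base branch, which is essentially what the IDE extensions automate.
# The PR number, remote name ("origin") and base branch ("main") are placeholders.
pr = 123
subprocess.run(["git", "fetch", "origin", f"pull/{pr}/head:pr-{pr}"], check=True)
diff = subprocess.run(
    ["git", "diff", f"origin/main...pr-{pr}"],
    check=True, capture_output=True, text=True,
).stdout
print(diff)  # or pipe it into your editor/difftool of choice
```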
Creesch@beehaw.org to Technology@beehaw.org • Frequent/Long-Term use of the Apple Vision Pro may rewire our brains in unexpected ways
18 · 2 years ago
Long term wearing of VR headsets might indeed not be all that good. Though, the article is light on actual information and is mostly speculation. Which for the Apple Vision Pro can only be the case, as it hasn’t been out long enough to conduct anything more than a short term experiment. So that leaves very little in the way of long term data points.
As far as the experiment they did, there was some information provided (although not much). From what was provided this bit did stand out to me.
The team wore Vision Pros and Quests around college campuses for a couple of weeks, trying to do all the things they would have done without them (with a minder nearby in case they tripped or walked into a wall).
I wonder why the Meta Oculus Quests were not included in the title. If it is the Meta Quest 3, it is fairly capable as far as passthrough goes, but not nearly as good as I understand the Apple Vision Pro’s passthrough to be. I am not saying the Apple Vision Pro is perfect; in fact it isn’t, if the reviews I have seen are any indicator. It is still very good, but there is still distortion around the edges of your vision, etc.
But given the price difference between the two, I am wondering if the majority of the participants actually used Quests, as then I’d say that the next bit is basically a given:
They experienced “simulator sickness” — nausea, headaches, dizziness. That was weird, given how experienced they all were with headsets of all kinds.
VR nausea is a known thing that even experienced people will get. Truly walking around with these devices, with the distorted views you get, is bound to trigger that. Certainly with the distortion in passthrough I have seen in Quest 3 videos. I’d assume there are no Quest 2s in play, as the passthrough there is just grainy black and white video. :D
Even Apple, with all their fancy promo videos, mostly shows people using the Vision Pro sitting down or indoors walking short distances.
So yeah, certainly with the current state of technology I am not surprised there are all sorts of weird side effects and distorted views of reality.
What I’d be more interested in, but what is not really possible to test yet, is what the effects will be when these devices become even better. To the point where there is barely a perceivable difference in having them on or off. That would be, I feel, the point where some speculated downsides from the article might actually come into play.
Creesch@beehaw.org to Science@beehaw.org • A peer reviewed journal with nonsense AI images was just published
1 · 2 years ago
Would you like me to quote every single one of your lines, line by line, and respond to them?
No, that’s not really what I’m asking for. I’m also not looking for responses that isolate a single sentence from my longer messages and ignore the context. I’m not sure how to make my point any clearer than in my first reply to you, where I started with two bullet points. You seemed to focus on the second, but my main point was about the first. If we do want to talk about standard behavior in human conversation, generally speaking, people do acknowledge that they have heard/read something someone said even if they don’t respond to it in detail.
Again, I’ve been agreeing that AI is causing significant problems. But in the case of this specific tweet, the real issue is with a pay to publish journal where the peer review process is failing, not AI. This key point has mostly been ignored. Even if that was not the case, if you want to have any chance of combating the emergence of AI, I think it is pretty reasonable to question whether the basic processes in place are even functioning in the first place. My thesis (again, if this wasn’t a pay to publish journal) would be that this is likely not the case, as in that entire process clearly nobody looked closely at these images. And just to be extra clear, I am not saying that AI never will be an issue, etc. But if reviewing already isn’t happening at a basic level, how are you ever hoping to combat AI in the first place?
When did anyone say
But by just shouting, “AI is at it again with its antics!” at every turn instead of looking further and at other core issues we will only make things worse”
The context of this tweet, saying “It’s finally happened. A peer-reviewed journal article with what appear to be nonsensical AI-generated images. This is dangerous.”, does imply that. I’ve been responding with this in mind, which should be clear. It is this sort of thing I mean when I say selective reading: you seemingly take it as me saying that you personally said exactly that. Which is a take, but not one I’d say is reasonable if you take the whole context into account.
And in that context, I’ve said:
that doesn’t mean all bullshit out there is caused by AI
Which I stand by. In this particular instance, in this particular context, AI isn’t the issue and the tweet is somewhat clickbait. Which makes most of what you argued about valid concerns. YouTube struggling, SEO + AI blog spam, etc. are all very valid and concerning examples of AI causing havoc. But in this context of me calling a particular tweet clickbait, they are also very much less relevant. If you just wanted to discuss the impact of AI in general and step away from the context of this tweet, then you should have said so.
Now, about misrepresenting arguments:
If you are reaffirming somebody else’s comment, you are generally standing behind most if not all of what they said. But nobody here is saying or doing the things you are claiming. You are tilting at windmills.
Have you looked back at your own previous comments when you wrote that? Because while having this, slightly bizarre, conversation I have gone back to mine a few times. Just to check if I actually did mess up somewhere or said things differently than I thought I did. The reason I am asking is that I have been thrown a few of these remarks from you where I could have responded with the above quote myself. Things like “It’s passing the buck and saying that AI in no way, shape, or form, bears any responsibility for the problem.”
Creesch@beehaw.org to Science@beehaw.org • A peer reviewed journal with nonsense AI images was just published
1 · 2 years ago
The fact that you specifically respond to this one highly specific thing, while I clearly have written more, is exactly what I mean.
shrugs
Creesch@beehaw.org to Science@beehaw.org • A peer reviewed journal with nonsense AI images was just published
3 · 2 years ago
I feel like this is the third time people are selectively reading into what I have said.
I specifically acknowledge that AI is already causing all sorts of issues. I am also saying that there is also another issue at play. One that might be exacerbated by the use of AI but at its root isn’t caused by AI.
In fact, in this very thread people have pointed out that *in this case* the journal in question is simply the issue. https://beehaw.org/comment/2416937
In fact, the only reason people likely noticed is, ironically, the fact that AI was being used.
And again, I fully agree, AI is causing massive issues already and disturbing a lot of things in destructive ways. But that doesn’t mean all bullshit out there is caused by AI, even if AI is tangibly involved.
If that still, in your view, somehow makes me sound like a defensive AI evangelist, then I don’t know what to tell you…
Creesch@beehaw.org to Science@beehaw.org • A peer reviewed journal with nonsense AI images was just published
2 · 2 years ago
I said clickbait about the AI-specific thing, which I do stand by. To be more direct: if peer reviewers don’t review and editors don’t edit, you can have all the theoretical safeguards in place, but those will do jack shit. Procedures are meaningless if they are not being followed properly.
Attributions can be faked, just like these images are now already being faked. If the peer review process is already under tremendous pressure to keep up for various reasons then adding more things to it might actually just make things worse.
Creesch@beehaw.org to Science@beehaw.org • A peer reviewed journal with nonsense AI images was just published
4 · 2 years ago
I feel like two different problems are conflated into one though.
- The academic review process is broken.
- AI generated bullshit is going to cause all sorts of issues.
Point two can contribute to point one, but for that a bunch of stuff needs to happen. Correct me if I am wrong, but as far as my understanding of how peer-review processes are supposed to go, it is something along the lines of:
- A researcher submits their manuscript to a journal.
- An editor of that journal validates the paper fits within the scope and aims of the journal. It might get rejected here, or it gets sent out for review.
- It then gets sent out for review to several experts in the field, the actual peer reviewers. These are supposed to be knowledgeable about the specific topic the paper is about. These then **read the paper closely and evaluate things like methodology, results, (lack of) data, and conclusions**.
- Feedback goes to the editor, who then makes a call about the paper. It either gets accepted, requires revisions, or gets rejected.
If at point 3 people don’t do the things I highlighted in bold, then to me it seems a bit silly to make this about AI. If at point 4 the editor ignores most feedback from the peer reviewers, then it again has very little to do with AI and everything to do with a base process being broken.
To summarize, yes AI is going to fuck up a lot of information, it already has. But by just shouting, “AI is at it again with its antics!” at every turn instead of looking further and at other core issues we will only make things worse.
Edit:
To be clear, I am not even saying that peer reviewers or editors should “just do their job already”. But fake papers have been increasingly an issue for well over a decade as far as I am aware. The way the current peer review process works simply doesn’t seem to scale to where we are today. And yes, AI is not going to help with that, but it is still building upon something that already was broken before AI was used to abuse it.
Creesch@beehaw.org to Science@beehaw.org • A peer reviewed journal with nonsense AI images was just published
6 · 2 years ago
I totally see why you are worried about all the aspects AI introduces, especially regarding bias and the authenticity of generated content. My main gripe, though, is with the oversight (or lack thereof) in the peer review process. If a journal can’t even spot AI-generated images, it raises red flags about the entire paper’s credibility, regardless of the content’s origin. It’s not about AI per se; it is about ensuring the integrity of scholarly work. Because realistically speaking, how much of the paper itself is actually good or valid? Even more interesting, and this would bring AI back into the picture: is the entire paper even written by a human, or is the entire thing fake? Or maybe that is also not interesting at all, as there are already tons of papers published with other fake data in them. People that actually don’t give a shit about the academic process and just care about getting their names published somewhere have likely already employed other methods as well. I wouldn’t be surprised if there is a paper out there with equally bogus images created by an actual human for pennies on Fiverr.
The crux of the matter is the robustness of the review process, which should safeguard against any form of dubious content, AI-generated or otherwise. Which is what I also said in my initial reply, I am most certainly not waving hands and saying that review is enough. I am saying that it is much more likely the review process has already failed miserably and most likely has been for a while.
Which, again to me, seems like the bigger issue.
Creesch@beehaw.org to Science@beehaw.org • A peer reviewed journal with nonsense AI images was just published
17 · 2 years ago
This feels like clickbait to me, as the fundamental problem clearly isn’t AI. At least to me it isn’t. The title would have worked just as well without AI in it. The fact that the images are AI generated isn’t even that relevant. What is worrying is that the peer review process, at least for this journal, clearly is faulty, as no actual review of the material took place.
If we do want to talk about AI: I am impressed by how well the model managed to create text made up of actual letters resembling words. From what I have seen so far, that is often just as difficult for these models as hands are.
Creesch@beehaw.org to Technology@beehaw.org • YouTube will now show a blank homepage if you don’t have watch history on
1 · 2 years ago
It still does? That is an entirely different page and still shows the newest videos of channels you are subscribed to. At least, for me it does.
Creesch@beehaw.org to Technology@beehaw.org • No apologies as Reddit halfheartedly tries to repair ties with moderators
19 · 2 years ago
This is such a cynical take. Contrary to popular belief, the vast majority of moderators do care about their subreddits, or else they wouldn’t be volunteering their free time. The allure of the power to remove some random person’s post on the Internet, or to ban them just so they return with another account, pales in comparison to the thrill of watching your community grow and people having fun because of it. And it’s not this weird selfish, hey-look-at-me-I’m-so-successful kind of thrill. It’s that you joined this thing because you are interested in it, and now all these other people who are also interested in it are there talking about it. That’s what’s cool: you set off to make this place where people can talk about this thing that you think is cool, and you get to watch it grow and be successful over time. Some of these communities have been around for over a decade, so people have invested time and effort into them for over a decade.
Moving to elsewhere isn’t really as easy as people make it out to be. At the moment “moving communities” means fracturing your community as there is no unified approach to doing this.
The operative word being “unified” which is next to impossible to achieve. If you get all mods to agree you will have a hard time reaching all your users. This in itself presents the biggest roadblock, ideally you’d close up shop and redirect users to the new platform. Reddit will most certainly not allow this, their approach to protesting subreddits that were not even aiming to migrate made that abundantly clear.
So this means that, at the very least, you are looking at splitting your community over platforms. This is far from a unified approach.
This isn’t even touching on the lack of viable long term platforms out there. I’d love for people to move to Lemmy. But realistically speaking Lemmy is very immature, instance owners are confronted with new bugs every day, not to mention the costs of hosting an instance. That also ignores the piss poor state the moderation tooling is in on Lemmy. The same is true for many of the possible other “alternatives”.
All the new attention these platforms have gotten also means they are getting much more attention from developers. So things might change in the future for the better, in fact I am counting on it. But that isn’t the current state of the fediverse. Currently most of the fediverse, specifically Lemmy, is still very much in a late alpha, maybe early beta state as far as software stability and feature completeness go. And, yes, the situation on Reddit is degrading and this latest round of things has accelerated something that has been going on for a while. But at the same time, Reddit is the platform that has been around for a decade and where the current community is. Picking that up and moving elsewhere is difficult and sometimes next to impossible. I mean, we haven’t even talked about discoverability of communities for regular users.
Lemmy (or any fediverse platform) isn’t exactly straightforward to figure out and start participating in. If you can even find the community you are looking for. Reddit also hosts a lot of support communities, who benefit from reddit generally speaking having a low barrier of entry. Many of those wouldn’t be able to be as accessible for the groups they are targeting on other platforms.
Creesch@beehaw.org to Technology@beehaw.org • Reddit removes years of chat and message archives from users' accounts
4 · 2 years ago
What is much more likely is that you didn’t delete all your comments and posts. Reddit “listings” only go back 1000 items, and if you delete an item from there, older items don’t pop into view. So if you deleted all comments visible from your profile and you had posted more than 1000 comments, you effectively never deleted all your comments.
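A rough sketch of what that listing cut-off looks like against Reddit’s public JSON endpoint; the username is a placeholder, and unauthenticated requests are rate limited and need a descriptive User-Agent:

```python
import requests

# Walk a user's comment listing page by page using the "after" cursor.
# "someuser" is a placeholder username. The listing stops handing out
# cursors after roughly 1000 items, which is why older comments never
# become reachable (or deletable) from the profile this way.
def count_listed_comments(username: str) -> int:
    headers = {"User-Agent": "listing-depth-demo/0.1"}
    after = None
    seen = 0
    while True:
        params = {"limit": 100}
        if after:
            params["after"] = after
        resp = requests.get(
            f"https://www.reddit.com/user/{username}/comments.json",
            headers=headers, params=params, timeout=10,
        )
        resp.raise_for_status()
        data = resp.json()["data"]
        seen += len(data["children"])
        after = data["after"]
        if not after:  # no more pages: the listing ends here, not at the oldest comment
            return seen

print(count_listed_comments("someuser"))
```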
Creesch@beehaw.org to Free and Open Source Software@beehaw.org • Stop Using Discord for Your Open source Communities | mattcen's mumblings
1 · 2 years ago
deleted by creator
Creesch@beehaw.org to Free and Open Source Software@beehaw.org • Stop Using Discord for Your Open source Communities | mattcen's mumblings
0 · 2 years ago
If you were trying to manage a server with 2k active users, 7 mods isn’t all that much. That is assuming for a moment this was a little while ago (Discord did release some pretty nifty mod tools over the last year or so) and you had not set anything up in regards to third party bots.
With the newest Discord mod tools, in addition to third party bots, Discord is in my experience very good to manage for a chat platform. Certainly much easier than IRC ever was, and still is for that matter.
This is already a thing. There are a myriad of LLM chat interfaces where you can either connect to models you are running locally or connect to the APIs of providers. “Open WebUI”, “librechat” and “Big-AGI” are web interfaces. On desktop you have things like jan.ai and a lot more.
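Under the hood most of these UIs just talk to whatever chat-completions endpoint you point them at. A minimal sketch, assuming a local runner that exposes an OpenAI-compatible endpoint on Ollama’s default port (the model name is a placeholder):

```python
import requests

# Minimal sketch of what these chat UIs do for you: send a chat request to the
# backend you configured. Here that backend is assumed to be a local runner
# with an OpenAI-compatible API on Ollama's default port; "llama3" is a placeholder.
resp = requests.post(
    "http://localhost:11434/v1/chat/completions",
    json={
        "model": "llama3",
        "messages": [{"role": "user", "content": "Summarise this thread for me."}],
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```

Point the same request at a provider’s hosted endpoint (with an API key) and nothing else changes, which is why these interfaces can switch between local models and hosted ones so easily.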