hckrnws
Brits are scrolling away from X and aren't that interested in AI
by LinuxBender
So the actual breaking news here is "Brits actually fairly normal after all"?
Neither X nor AI is particularly relevant to the average person, British or not, unless they're terminally online and/or the kind of person to write unhinged nonsense on LinkedIn.
Sure, quite a few people might use ChatGPT or whatever sometimes, but they use Excel too and they're probably not especially "interested" in that either.
> Neither X nor AI is particularly relevant to the average person, British or not, unless they're terminally online and/or the kind of person to write unhinged nonsense on LinkedIn.
According to the article X reached 26 million UK adults in May 2022. That's half of the UK adult population.
That's a pretty meaningless figure without the methodology used to come up with it.
It might just mean 26 million UK adults have heard of a tweet. Think covfefe: barely anyone read that tweet, and yet it reached everyone.
Down to 22.1 million now though, I think is the point.
But honestly, twitter never forms much of the conversation outside of media types.
People would of course talk about the information found on the platform, not the platform itself.
The average person on AI:
"Forty-three percent had used one for work" … "The numbers are slightly different for the under-16s. Fifty-four percent said they had used a GenAI tool, with more than half (53 percent) of those saying they had used it for schoolwork."
78% of people aged 16-24 had used a GenAI tool in the last year. That's a remarkable adoption rate IMO. We're talking about this technology as if it's the future, but it has already transformed secondary and post-secondary education beyond all recognition.
https://www.ofcom.org.uk/siteassets/resources/documents/rese...
> 78% of people aged 16-24 had used a GenAI tool in the last year.
More interesting would be the percentage of people that have used GenAI in the past year with a purpose.
I suspect a significant number have just played with it. Likely many didn't make it a habit, let alone ever do anything useful with it. It would be a bit of a stretch to say anyone in this group has 'adopted' AI.
> More interesting would be the percentage of people that have used GenAI in the past year with a purpose.
Also how many would continue to use genAI if it weren't artificially cheap/free. Remember that nearly all AI companies are losing money hand over fist, there's bound to be user attrition when the walls close in and they have to start shoving in ads or aggressively upselling paid subscriptions.
"Write an essay about X" is a purpose.
If we keep testing young people on AI-friendly tasks, they will keep using AI at a higher rate than the rest of us.
This! Young people are just using it to do schoolwork. It’s like having a free homework completer
Please go talk to a teacher. Almost everyone is submitting AI generated stuff, everywhere.
The big question, really, is whether they _regularly_ use it. I’ve used ChatGPT… precisely once, as a novelty.
This is one of the most interesting things about AI to me.
As a lifelong computer nerd I've always dreamed of having a computer friend who knows all about my life and who I can bounce ideas off, it's a core sci-fi trope. But I tried these new LLMs everyone was so hype about and... nah, they all feel like I'm wasting my time. Anything I want to actually know I can look up myself faster than negotiating with these things to try guess what they want me to tell them so they will actually do what I wanted. And as far as just being a nice personality to spend my time with goes, they all come across to me as awkward or pretentious or fussy, and not what I want my computer friend to sound like. If I need to spend 20 minutes trying to "train" my computer friend how to sound vaguely like a real person, it's a fail, sorry. It feels like trying to socialize on Reddit or Discord, you waste a bunch of time going back and forth with strangers who are trying to be funny or clever but never actually say anything meaningful. Which is why I don't use those sites either.
But then I have real life friends who have mentioned they talk to these chatbots all the time, almost like a therapist, and they ask them questions about their lives. Obviously they don't take the advice as seriously as they might coming from a professional, but they still consider it a valuable interaction. To them, having this kind of stilted interaction with a hokey-sounding AI is worth something. And I don't get what they are seeing that I am missing. Perhaps my standards are too high? Or perhaps they just have a greater tolerance for wasting time?
Anyway, when surveys ask me if I've used AI, I always say yes. The worst surveys then do not ask a follow-up question of "did you think it was actually useful/compelling/fun", but of course they don't because the whole point of them is to build hype to sell the gimmick to another company who will shoehorn it into whatever.
i hope this is ai generated...
but anyhow, the part about the lame personality (they are prompted as branding now) was the main focus of the current AI renaissance, which everyone likes to tell themselves started with the attention paper, when in reality it started with a new wave of companion bots, which ironically were praised for adapting their personality really well.
There’s just so much wrong here.
From statements to spelling.
keep telling yourself your new fad career wasn't born out of channers love bots. but we both know the truth.
I see teenagers chatting with it to study while taking the bus. That's not some idle one time experiment, that's using it every day type of behaviour.
Why only once?
It’s not something I’d find useful, and it wasn’t particularly amusing.
I will confess to enjoying bad AI _image_ generation, mind you. Look at the images on this website! https://theonlineaccountants.uk/file-company-accounts.htm
AI generated text generally just makes me cringe, tho.
> transformed secondary and post-secondary education beyond all recognition
"Beyond all recognition" seems like quite an extraordinary claim.
I didn't have a teacher growing up that I could talk to 24/7, who would answer all my questions no matter how ridiculous, with all of the digitized human knowledge at their fingertips and the patience of a saint for every last one of my dumb questions.
> 78% of people aged 16-24 had used a GenAI tool in the last year. That's a remarkable adoption rate IMO. We're talking about this technology as if it's the future, but it has already transformed secondary and post-secondary education beyond all recognition.
Your wording choice there is excellent, as "transforming" can be either extremely positive, or negative, or just different. Given how many students now, judging by various outcries on social media by teaching staff, are borderline illiterate in middle school, it does make one wonder what the long-term effects of this will be.
It looks like the transformation is going to be that all marked work will be done on paper in class. Anything else is pointless since students just have all their work generated for them.
Standardized test scores were marginally improving for decades, although they tanked during Covid
Google is so terrible that you either have to use AI to filter it or pay for a search engine that doesn't suck.
The web in general sucks for quickly finding information, from the essays that precede a recipe to a news site that autoplays an irrelevant video on every story. It not only wastes time, it wastes your phone data. The ChatGPT search feature has eliminated the junk and gives me the info I’m looking for.
This makes sense. Since googling something now counts as “using a GenAI tool”, everybody uses and cares about AI.
> Forty-three percent had used one for work
That's honestly pretty low considering the sheer number of products that now have AI integration. I honestly think a lot of the hype around AI taking over jobs is because the people who are spreading it are more likely working in some pretty dull coding environments where copilot can shine, because you just plain don't write a ton of interesting or otherwise off-the-beaten-path code, and that's what copilot excels at. And then those people have spare time at work to evangelize about this great technology.
And before anybody starts with me, yes I have tried it. It's fine. A lot of what I work on isn't standardized or boilerplate enough to where copilot can really help me, and the situations where it can, I found the time I spent describing what I wanted, getting the answer back, copying it into my IDE and then customizing it as required was frankly just better spent writing it myself. Maybe it made me SLIGHTLY faster? But nothing approaching what I would suggest the term "coding AI" implies, and frankly, I enjoy writing the easier code on occasion because it's a nice break from the harder stuff that copilot can't touch.
Like if you're a freelancer who jumps between gigs of refactoring ancient websites into something vaguely web 2.0, you would probably get a lot of mileage from copilot where you're just describing something as written (or hell, giving it the existing code) and asking for it to be rewritten. But if you're doing something novel, something that hasn't been posted on StackOverflow a thousand times, you will run into its limitations quite quickly and, if you're like me anyway, consign it to the bin, because fundamentally asking it to make something, finding out it can't, and then making it myself is FAR more annoying than just assuming it can't and moving on.
Some people are being pressured by their management to use AI to increase efficiency in their jobs, with varying degrees of success.
I have a friend who works in logistics for GE and they’re getting training on the basics of GenAI and then they have to go out and find ways to integrate it into their workflow. The problem is that management isn’t doing the legwork to understand how to integrate the tooling, they’re just handing that responsibility off to the actual users. Those people wind up complaining that taking time to integrate these tools winds up slowing them down and they struggle to find meaningful applications for the LLMs.
It’s like management is saying “here’s a new hammer, we don’t know how to use it but the guy who sold it to us convinced us you can figure out how to use it. So go out and do it and be better and faster at your job, good luck”.
I'm running into this now. Boss is all but mandating everyone use Cursor.
I'm not anti-AI. I don't mind asking an LLM to write me some boilerplate. But I don't want to change my tooling, nor have an integrated assistant. My output is fine.
I'll hold out as long as I can, but it feels like this may one day just become the reality.
Hype cycles come and go, AI will be no different. The core technology may remain and advance, but the fever will break once reality sets in/and or prices continue to climb.
Gonna be fun to see the results of cases like "I didn't defraud the company, gen AI did!", of which I suspect some will actually be fraud and some not. But I don't envy the lawyer who has to explain neuronal computation to a nontechnical person.
Getting Claude to extract numbers from the image of a spreadsheet someone inexplicably sent you instead of the actual spreadsheet is not the same as "being interested in AI"
From my experience most people who use anything other than ChatGPT/c.ai are somewhat interested in AI
I’m interested in ai but still haven’t used it. When I finally focus on it I will try kagi’s ai. I already subscribe to search.
> I’m interested in ai but still haven’t used it
There's several free options, so what's stopping you?
Hard to say.
Here are a few things keeping me from looking into it.
I might feel so entrenched in my process that I can’t figure out where to start.
The current crop of offerings only interests me from a self-hosted point of view. I’m interested in an AI that is aligned closely with my own world view. If there is a bias built in, I would rather it be generated by me.
When this AI wave first started, the story was that it was so complex the origin of the answers was considered unauditable. Which I took to mean it was either deceptive, or the pushers didn’t understand what they were doing.
As for when my interest started: it goes back to the ‘80s, and I was playing with OpenCYC about 15 years ago. I keep saying I will take a look, but other projects keep grabbing my interest.
I understand where you're coming from.
Good news on a few of those points, but not all of them.
> I might feel so entrenched in my process that I can’t figure out where to start.
This is the easy one. You can just tell them that, in those words, with what you're up to, and they suggest stuff back in English. (And if you're not a native English speaker, use your native language and many will respond in that language instead of in English).
> The current crop of offerings only interests me from a self-hosted point of view.
My experience with current self-hosted ones is… they're not worth bothering with. (But also, my machine is limited in the model size it can use locally, so this may not impact you so much).
> I’m interested in an AI that is aligned closely with my own world view. If there is a bias built in, I would rather it be generated by me.
Good news here, but for what I consider to be bad reasons. They're often wildly sycophantic, so they'll often display whatever bias they "think" you have.
> When this AI wave first started, the story was that it was so complex the origin of the answers was considered unauditable. Which I took to mean it was either deceptive, or the pushers didn’t understand what they were doing.
It's getting slightly better, but unfortunately the problem is mostly the "don't understand" part; there are too many different teams involved for me to feel it's deceptive. If you really need auditable results, I'd stay away from them too — they're definitely not in the vein of AI that CYC ever was.
Thanks for listening and commenting in a thoughtful way. I'm currently upgrading a 20 year old sailboat. I come across parts that I don't recognize and I have thought about trying to image search what is in my hand.
This summer I was identifying every weed in my yard because every photo has that info. I also try to keep spell check turned off.
Your point about sycophantic AI is what I'm looking for. I see it as an opportunity to remove the white noise of the internet. I also understand your point regarding the dangers.
My response is that AI is no different from any other technology.
Know thyself.
Everyone who had to fill in pages of text for no particular reason they believe in will use any means, no matter how bad.
This includes teens doing homework, and everyone else filling in job applications. Same mindset.
AI? Sure. Google search and adapting any template on the very first page? Sure!
This might also be a demographic issue - I live in an affluent area and GenAI is already woven into everyday life so much that it is difficult to imagine life without it. Just a couple of examples - my 11-year-old kid does most if not all of her studying/research/… through GenAI. Her class has a Google doc where they share prompts they found useful for learning xyz… My kid asked me something last week and I said “no clue, google it” and her response was “google it??? thats funny dad…”
My homeowners association used GenAI to go through years of transactions to create a budget etc…
> My homeowners association used GenAI to go through years of transactions to create a budget etc…
… Yikes. Hope they checked it afterwards. These things are quite bad at financial stuff; they’re good at producing something which looks superficially reasonable, but when you look closer, well, it’s all nonsense.
I know, right?!
> Ofcom measured X's adult reach
I wonder if the same trend is happening for younger age groups as well. I was surprised at my two nephews scrolling X yesterday reading memes after dinner. They're 14 and 16 and I guess deeply in the "gamers" culture? They shared some of them with me and I wasn't into the edge-lord stuff, but they insist that it's just ironic usage.
Much like every generation, I'm likely just not hip enough to understand the youth.
I’m reminded of “I was a teenage edgelord” https://medium.com/@srhbutts/i-m-sarah-nyberg-and-i-was-a-te...
It's funny reading this nearly 10 year old post in 2024 and noticing that many of these tactics are now universal and some of the people mentioned (Ian Cheong) are still being called Nazis, but by people on the other side of the political divide.
Turns out that the author of that article was/is a pedophile, was using "but I was just being an edgelord/muh right-wing harassment" as a smokescreen to divert attention from her bad behavior, and (wisely) vanished from online after about 2017 when she couldn't whitewash her image anymore.
https://medium.com/@srhbuttstruth/5-reasons-you-shouldn-t-st...
You believe that an anonymously written article full of baseless speculation that relies on an anonymous Tumblr post as its source is credible?
It would benefit you to educate yourself on how to evaluate the reliability of claims you read online. Here's a tip to help you get started: anonymously written online posts that rely on other anonymous posts should be considered with a very high degree of scrutiny.
source_wojak.jpg
Did you bother to check the links in the article and tumblr post? You would have found archives of the web sites in question, with Nyberg's horrifying words dating back to 2006 preserved. The chat logs have been preserved and distributed all over the place as well; they're not hard to find.
Yes, I very obviously did because I explicitly referred to it in my comment.
Nothing in your comment changes the fact that you linked to an anonymously written article full of baseless speculation that relies on an anonymous Tumblr post that is also full of baseless speculation.
Not a single credible source in the entire mess of nonsense that you linked to. You should be embarrassed.
For anyone looking for a good alternative to Toxic Twitter/X, I can't recommend BlueSky enough.
I signed up and it's a breath of fresh air without the artificially promoted toxic posts showing up in my feed.
Only because it's small and still burning VC money. The algorithm will come once they need to pay the bills.
And then we'll move to the Next VC one! Huzzah!
Eh, if they burn VC money for another 10 years, who cares? There's another one that will pop up, and we'll continue the ride.
Bluesky is the most refreshing thing I’ve seen on the internet for a long time. The feed actually shows stuff based on the people I follow rather than just rage bait and another country’s politics.
Just like the “following” tab on X.
That's the best part of Bluesky: I don't have to look only at the people I follow. I can find new people with interesting content based on the people I already follow, while X has two tabs: your direct follows only, and /pol/.
Indeed. And I'm finding a lot of people that I used to follow on Twitter are now on BSky as well. My feed is not nearly as sparse as I thought it would be.
It’s the hugbox you want with tooling to ensure nothing contrary ever has a chance to be heard.
Not unlike X, you only hear what you want to hear.
Pure opinion, but I feel like the UK has a strong cultural bias towards doing things the way they’ve always been done, which can make us a bit resistant to new technologies and ways of doing things.
I don't think we're resistant, just hesitant pragmatists. When something new and shiny turns up, we don't necessarily accept the marketing saying it's going to improve our lives. Best to wait for someone else to get through the early adoption pain and work out all the kinks so we don't have to waste our time on it if it turns out to be a lemon.
With this strategy I entirely missed blockchain, crypto and NFTs and am in the process of missing AI.
To think that AI is as irrelevant as blockchain, crypto, and NFTs is a tragic error many are committing, sir. Those things were trivial and useless, and that was clear from the start; even though tons of money and marketing were poured in, nothing changed because of the blockchain, at least nothing useful.
AI is a completely different story: you likely don't even realize you are already a heavy user, just as a side effect of everything technological you use, from voice dictation, to medical applications, to all the images you see around. Soon also: when you watch a movie. Moreover, LLMs are already transforming the way people work.
These two things, blockchain and AI, have very little in common other than the hype.
So what you're saying is that blockchain, crypto, NFTs had no application up front. Correct. I agree there.
What I am saying is that the applications of AI cannot be fulfilled to the level of the promises made. The promises were made to solicit hype to generate cash, not because the idea was viable or provably achievable. When we reach maturity, we'll see what is left, and I'll wait for that. That's fine. In the meantime I'll have to put up with cats appearing every time I search for dogs in Apple Photos and arguing with ChatGPT about its understanding of the relative magnitude of 9.9 and 9.11, while everyone tells me repeatedly with sweat on their brow that WhateverMODEL+1 will make that problem go away, which it didn't on WhateverMODEL-3,-2,-1,0. Only another $2 billion of losses and we'll nail it then!!!
The end game for all technology changes is not what we think it will be. Been in this game a long time and that is the only certainty.
I'm not sure what promises you are talking about, but I've found LLMs to be extraordinarily helpful for both my job and daily life. They are excellent at translation, summarization, troubleshooting, and brainstorming. I've used OpenAI's API to translate an entire epub, including the HTML so images are retained, and the results were shockingly good after some prompt fiddling. With Claude I've received some excellent advice on decorating my living room, organizing my schedule, and quick hypotheticals. There are no pinky promises here; it already works.
For general Q&A they can hallucinate, but so long as you are using it to augment your productivity and not as a driver this isn't any different than using stack overflow, or any other kind of question you might ask on the internet. It's basically a non issue too if you upload a document into its context window and stick to asking questions about that document though.
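For what it's worth, the epub approach described above can be sketched offline. This is a hypothetical illustration, not the commenter's actual script: `translate` here is a stand-in for a real LLM API call, and the tag-splitting regex is an assumption about how the markup was kept intact.

```python
import re

def translate(text: str) -> str:
    # Placeholder for a real LLM API request (e.g. a chat completion with a
    # "translate this, preserve formatting" prompt). Echoes so the sketch
    # runs offline.
    return text

def translate_html(html: str) -> str:
    # Split on tags with a capturing group so the tags survive the split;
    # only the text between tags is sent for translation, and the markup
    # (including <img> tags) passes through untouched.
    parts = re.split(r"(<[^>]+>)", html)
    return "".join(p if p.startswith("<") else translate(p) for p in parts)

chapter = '<p>Some text <img src="fig1.png"/> more text</p>'
print(translate_html(chapter))
```

Chunking paragraph by paragraph like this also keeps each request small enough that the model is less likely to drop or reorder content.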
>I'm not sure what promises you are talking about,
AI wiping out programming as a career. AI wiping out writing stories. AI replacing the need for doctors to diagnose illness. AI generally replacing all white collar jobs.
LLMs are useful assistants, but they are nowhere near the hype flooded everywhere a year or so ago.
I don’t see how they’re nowhere near the hype.
Did everyone think it would take two months and all the doctors in the world would lose their jobs to ChatGPT?
AI is a societal shift that will take place over the next 20, 30, and 40 years, much like what happened with personal computing. This is a time horizon that impacts investments right now. Professions that existed for thousands of years will cease to exist. That is an unbelievably big change.
You should celebrate that there are people in this world who think like this… as long as they are around we can capitalize on it :) like the people who were still riding horses when Fords started rolling around… :)
Mostly I agree, but I would add:
> AI wiping out writing stories
FWIW, I think LLMs make better stories than quite a lot of the human writers on Reddit.
Not that many of the Redditors were ever going to go on to be successful novelists, of course (and I say that as someone who has been struggling to finish writing this darn book for the best part of a decade now…)
Fair enough, I never really took those claims seriously but will concede that many still seem to be in that headspace even now.
Honestly, and this is not personal, I doubt your ability to determine a bad summarisation or translation outcome. My wife is a professional translator and spends a good deal of time picking up the steaming wrecks that LLMs have left after someone went for the "cheap" option first. And we're talking best of breed stuff like DeepL here.
As for the other points, I rather like to spend some time thinking on them personally. If you're not connected to the decision yourself, what are you?
> My wife is a professional translator and spends a good deal of time picking up the steaming wrecks that LLMs have left after someone went for the "cheap" option first. And we're talking best of breed stuff like DeepL here.
Just so we are clear, for Japanese to English translation, DeepL is hot garbage compared to a top class LLM with the right prompt. DeepL translations are basically unreadable, and regularly just cut sections out entirely! So I wouldn't call DeepL "best of breed" by any means, it's not even at the starting line. Can't comment on English <-> French/Spanish/German/etc though, never tried it with those.
In my case the epub was technically a replacement for a fan translation I was reading, which was decent enough, but with a simple script and instructions to keep the vibes of a light novel, it got very good, I remain impressed. Next I plan to convert it all to markdown to see if I can help encourage it to structure paragraphs properly, the html tags have so far limited it to line by line translation.
When I've experimented with officially translated works, meaning cases where I've translated the raw and compared it to the official version, it's still not up to par, but good enough in my opinion. I'm not aware of any paid service that streamlines this yet though, not sure why. It's nothing like traditional MTL.
> As for the other points, I rather like to spend some time thinking on them personally. If you're not connected to the decision yourself, what are you?
What? It's a dialogue, a conversation, I bounce ideas off it and ask for advice to help guide the direction of my thinking, have you ever even used an LLM? I do this with my friends and co-workers too, do you not do this?
This comes off as a bit presumptuous, an LLM lacks executive thinking, if I'm not directing the conversation then the LLM has nothing to give.
Just to note my wife is Japanese so she's aware of that. She does German / French as well and it's fine for that. But still needs a lot of work cleaning it up.
AI and Crypto both use extraordinary amounts of electricity but at least AI actually does something and has replaced Google for me in at least 50% of searches.
Is that because AI has gotten better or because Google has gotten worse?
IMO, both.
There's a lot of stuff that LLMs can do for me that Google never could, like synthesise a new python script to solve some idea I want to iterate on.
But also, Google results nose-dived and only recently seemed to get less bad… though now it seems to be the turn of the YouTube home page to be 60 items of which 45 are bad, 14 are in my watch later list already, and only 1 is both new to me and interesting.
I’m glad my view of my country, or any country, isn’t so small that I would imagine everyone thinking or acting a certain way.
Based on my experience growing up in the UK, then having long visits to the US and moving to Germany… the UK overall is fairly open minded to new tech and social change.
Well, so long as the monarchy and the castles remain.
I think the Brits are pretty fast at adopting technology, their digital public service infrastructure for example is excellent as is their research and engineering sector but I think they have a pretty big distaste for hucksterism or fads and a very no-nonsense attitude. Sort of like us Germans but less digitally averse and with a better bureaucracy.
I'm British but have lived in Germany for several years. I'd say the Germans are distinctly more conservative than Brits. Germany has several aspects of business and law which do (and admittedly I never lived in medieval times) literally feel like something out of some medieval guild system. "I must tithe to the Driver's Guild" is the long and the short of the entire learning-to-drive system here, for example. And Germans just accept it, or, even more pathetic, defend it. They just accept having these legally mandated wallet inspectors / guild members ambushing them every now and then for cash. Baffling.
Hard disagree. You might think that, then you see how other countries do things, and you realise we're actually pretty good at adopting new stuff. Not the best, but better than most.
You're talking about the country that started the industrial revolution.
They're talking about a country that still has Kings and an unelected senate.
Plenty of modern countries still have kings. In practice it doesn't affect democracy much, other than being another source of waste and corruption.
The House of Lords though, now that is weird.
Versus a country where many places even in big cities still expect to check a signature for payment. Lack of adoption of chip-and-pin is baffling.
People living in a techno bubble are always surprised how far behind "normal" people are and how little they care.
I see people doing white-collar jobs, where most of the work is doing stuff on computers, who are absolutely not interested in any of it.
I work with people from all over Europe and it's mostly the same, so I would not say Brits are unusual; people in general have a bias towards doing things the way they've always done them.
Last month our company released a new interface because the old one was built on unsupported tech, and with all the regulations we had to change it anyway. The outrage lasted 2 weeks; people are getting used to the new way, and in 2 months no one will remember the old one.
There are many people like me who grew up pushing for more tech and are now backpedaling. You start to see through the trends, the marketing, the manias… and, nowadays, the disconnect between joy, usefulness, and actual results.
Could be, but a bit of conservatism ain't gonna hurt; more conservative nations like Switzerland or the Nordics are doing more than fine long term, and their QOL is top-notch globally for various reasons.
Much better than having a sheepish mentality and chasing what the rest of the crowd is chasing; it shows some character, thinking for oneself, and not being an easy subject for manipulation. X was almost pure toxicity even before Musk's ego trip, and I never understood why I should care about the random brainfarts of people 'I should be following'. Don't people have their own opinions formed by their own experience? That's a rather poor way of spending the limited time we have here, on top of training oneself in quick cheap dopamine kicks, which messes people up for the rest of their lives.
ChatGPT at least tries to be added value, but beyond somewhat better search, hallucinating some questionable code, and some random cute pics (the novelty of which wears off extremely fast), I don't see it. I mean, I see the potential, just not the reality right now. Plus, on the code part: I want to keep training myself and my analytical mind; I don't want to be handed easy answers and become more passive and lazy. That's why I do git via the command line and not just by clicking around. That's why I don't mind at all writing some easier algorithms instead of having them served. My employer only wants a good result; I am not working in a sweatshop being paid by the number of lines of code per day.
Quality of life is about completely different things anyway. IMHO the UK is fine in this regard.
Facts. It’s why they have no semiconductor industry, why their cars are literally the most unreliable brands, and why they are becoming poor af compared to Americans. I’m so tired of euros acting smug about not working hard. There’s a reason the EU and the UK are fading into obscurity.
only Brits?
the set of people interested in AI seems to be quite specific: techno-optimists, fad-seeking "entrepreneurs", and people who can live with low-quality outputs.
Yes, because it was a survey by a British regulatory body, the Office of Communications, aka Ofcom.
There’s a lot of that on twitter for sure
Sample size was ~7300 people, polling done in May / June 2024.
Looks like various breakdowns of cohorts by age - in one section there's 7 groups.
A thousand people per group is respectable, but OTOH each respondent's answers are being extrapolated to represent the behaviour of ~10,000 people.
If my rusty sampling skills are correct, 7,300 surveys could be enough for 98% confidence with a 1.5% margin of error.
Someone please correct me if I'm wrong, but I think a sample size of 7,300 should be enough for the UK's population.
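For a rough sanity check of that figure: under the standard (and admittedly simplistic) simple-random-sample assumption, ignoring weighting and design effects, the worst-case margin of error can be computed directly. The sample size of 7,280 is taken from the Ofcom methodology quoted downthread; the z-value of 2.326 corresponds to 98% confidence:

```python
import math

def margin_of_error(n, z=2.326, p=0.5):
    """Worst-case margin of error for a simple random sample.

    z = 2.326 corresponds to ~98% two-sided confidence;
    p = 0.5 maximises the variance term p * (1 - p).
    """
    return z * math.sqrt(p * (1 - p) / n)

# Ofcom's sample of 7,280 UK internet users
print(f"{margin_of_error(7280):.2%}")  # roughly 1.4%
```

So the back-of-the-envelope claim holds up: at n = 7,280 the margin of error comes in just under 1.5% at 98% confidence.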
Depends where it's taken - an online survey will be biased, respondents recruited at AI meet-ups extremely so, etc.
> Wave 6 of the Online Experiences Tracker was conducted among a nationally representative sample of 7,280 UK internet users aged between 13 and 84. Fieldwork was conducted over a ten-day period between 23rd May and 8th June 2024. The data has been weighted to be representative of the UK internet user population on age within gender, and overall, to regional and SEG profiles.
This always bothered me - the sample is 100% internet users, and 100% people with the time and will to take a survey - it will always be a very biased sample that can't be corrected by weighting by age/gender etc.
Despite my being a Brit, X did nothing but flood my feed with both left- and right-wing commentary about the recent American election, with some extreme views on either side. For context, I only follow tech accounts.
Yes - it wasn't always like that, but it would be interesting to see the "algorithm" now, wasn't Elon all about opening that stuff up?
The algorithm repo hasn't been updated in more than a year. Who knows if it was ever even the real algorithm, but it certainly isn't now unless you believe that they haven't updated it in a year.
Yes, you can find the blog post and repo through a quick search.
It's not been updated in over a year. It was a ruse: https://github.com/twitter/the-algorithm
There is a lot of resistance to change in the UK. So many businesses and public-sector jobs depend on legacy processes that could be automated away, but are not. There are many reasons for this, the main one being that older generations tend to hold senior positions and have yet to grasp the full potential of the tech. This particularly applies to private industry.

In the public sector, the government is so intertwined with those jobs that it is too scared to rock the boat and lose votes or risk strikes, so it ignores the elephant in the room.

Finally, I would say British culture is one of scepticism bordering on cynicism. Unlike in the US, we don't like to see winners win. There is a perceived virtue in being above competition, and we secretly sneer at people who strive for better, so it doesn't surprise me that some people would instantly dismiss the potential of AI if a colleague suggested it to them.
I’m honestly surprised at how small the drop is, because it feels like I no longer know anyone who’s using it. I haven’t heard a conversation involving something someone read there in forever. I hear about Reddit all the time.
feels like OSS early-adopter folks have been trying to leave for years; legal + media people moved to bsky this month and are starting to close their twitter accounts; younger VC / founder types are trying to stick it out
some of the OSS + sci-fi author crowd has been making it work on mastodon but may come to bsky (think Charlie Stross)
celeb + sports accounts are joining bsky now post-election, and in theory a bunch of users that will jump with them; insta/threads may still win this slice though
rn on bsky there's an early-twitter dynamic of, like, Mark Hamill following tech journalists
I suspect that bsky will have very Twitter-like problems as it scales. Still, I might check it out before that happens.
yeah, there's a theory of social networks that you have to move to new ones as they age
I think twitter aged artificially fast after it stopped investing in moderation and also shut off external researchers' access to the firehose (they were helping discover bad actor networks)
bsky seems more open to integrations at this point; like w/ subreddits, ownership of communities by those communities makes maintenance cheaper
> rn on bsky there's an early-twitter dynamic of, like, Mark Hamill following tech journalists
Lmao
A vast number of people use Facebook, but when did you last hear about anything happening on Facebook? I think Twitter has lost cultural capital faster than it has lost actual users; there are still a bunch of people there (or were until the last couple of months, anyway; there really does seem to be a sea change now, and the above study, going up to May, won’t capture it), but it’s a lot less culturally important.
Because it's a dumpster fire of shrill partisan US politics.
I'm following only the SF/HN-flavoured tech crowd on there - the thinking being that it'll be about tech. An intentional attempt to buy into a bubble, if you will. That used to work well.
Since Musk's takeover and the current election cycle, even that is intolerable. The tech gang no longer tweet about Kubernetes etc.; instead it's about immigrants, Joe Rogan, and whatever has Marc Andreessen's knickers in a twist today.
Before leaving Twitter, my feed was almost entirely American politics, street fight videos, and content farm bots. Yet I only followed Australian artists and Australian furries. I watched someone else scroll Twitter and they had the exact same slop posts on their feed.
Now I’ve switched to Bluesky and my feed is immensely better.
Andreessen and co. have always been part of this strange utopian libertarianism where they believe with all their hearts that tech sets you free. I say strange because it's optimistic and forward-looking, not cynical. Peter Thiel used to be the main "villain" while the others kept to themselves. It seems that's not the case anymore, and they are more comfortable speaking their minds.
Realistically though, as an outsider, I think X.com has just become their echo chamber and they've all lost the plot.
You chose who you follow, and you can block any words you like. I see basically zero politics.
Bluesky awaits your arrival. Unfortunately, there's already a lot of the creepy anime-avatar crowd there, but luckily the nuclear block exists on that platform.
Hopefully everyone is getting away from X
Twitter was cool for about a couple of years. I haven't tweeted since, like, 2012. I'm not in a hurry to jump on Bluesky.
Best Twitter was when it was just tweets of "eating cereal for breakfast" and the pull-to-refresh-ification of everything.
There is a bluesky doomscroll waiting for you in the new echo chamber.
This is the right answer, sadly. I had moved over to Mastodon and already people are using it to raise pitchforks. I just don't think our human brains deserve the power that comes with social media.
There are only three ways to avoid being in an echo chamber:
- Use a system that allows you to choose your sources and choose to follow a diverse set of content
- Use a system that does not allow such choice and trust that its algorithm supplies a diverse set of content
- Don't use social media
I happen to choose the last one, but between the other two I believe user choice is more important. Musk has put his right wing thumb firmly on X's scale so that rules it out imo. Anywhere that respects your choices is only an echo chamber if you make it one.
I mean, really, is anyone all that interested in generative AI, barring VCs?
Hear, hear! :)
I'm pretty disappointed with the rise of reddit. In my opinion, it has a very serious confirmation bias problem.
This seems like RICU propaganda
They weren’t that into democracy either. Didn’t stop the Yanks from mopping the floor with them.
Losing the US colony was a minor inconvenience. We were busy with France.
I get that it's important to you, but US independence barely features in our history lessons.
I’m sure it doesn’t. It was such a blemish on the empire it needed to be scrubbed.
Also, how’s it going with France?!