Strongly agree. The failure to address these things for what they are just normalizes them.
Hopefully the next places will be more durable. It is still SAD and damaging when vibrant communities get destroyed though. That's more what I'm lamenting.
People haven’t adjusted yet to the reality that online social ecosystems matter; they affect so much in the real world. Decimating multiple online spaces in such a short time has consequences, and I hate that a handful of random guys with no stake in any of it except money get to make decisions like that.
You have articulated exactly how I feel whenever I see that word in a headline haha.
I feel you’re coming at this from an abstract angle rather than from how these things actually play out in practice. This isn’t reliable software, it isn’t proven to work, and the social and economic realities of the students, families, and districts have to be taken into account. The article does a better job explaining that. There are documented harms here. You, an adult, might have a good understanding of how to use a monitored device in a way that keeps you safe from some of the potential harms, but this software is predatory and markets itself deceptively. It’s very different from what I think you are describing.
Yeah, I just fundamentally don’t think companies or workplaces or schools have the right to so much information about someone. But I can understand that we just see it differently.
An issue here for me is that the kids can’t opt out. Their guardians aren’t the ones checking up on their digital behavior; it’s an AI system owned by a company, on a device they are forced or heavily pressured to use by a school district. That’s just too much of a power imbalance for an informed decision, to my mind, even if the user in question were an adult. Kids are even more vulnerable. I do not think it is a binary choice between no supervision and complete surveillance. We have to find ways to address potential issues that uphold the humanity of all the humans involved. This seems to me like a bad and also very ineffective way to meet either goal.
Kids going to school cannot reasonably be expected to have the knowledge, forethought, or ability to protect themselves from privacy violations. They lack the rights, info and social power to meaningfully do anything about this. That’s why it’s exploitative and harmful. Edit: that’s also to say nothing of the chilling effect this is going to have on kids who DO need to talk about something but now feel they have to hide it, or feel ashamed of it. Shit is bad news all around.
This is awful. Surveillance is not a replacement for childcare. How many times must people say it. It is also not a replacement for managing employees or any other thing. I hate this timeline.
This was a great read. These dynamics are so prevalent.
It isn’t.
Breitbart? No, thank you.
Yeah. I am trying to find ways to disengage from the nonsense without disengaging from my like, actual responsibilities to my society. But the jury is extremely out on how I do that right now. Having my emotions (and everyone else’s) manipulated for the gain of others no longer feels useful or like staying informed.
These people need to redirect their followers’ attention and anger onto literally anything but real circumstances. I’m so tired, it keeps working.
Yep, I use it. I like it.
Thanks for posting this context. I’ve been wondering about this aspect of the event.
A disabled and chronically ill writer, who goes by @broadwaybabyto on social media, views masking as community care, saying she wears a “mask to protect others and show solidarity with all disabled and vulnerable people.… Many of us have sacrificed four and a half years of our lives, going to great lengths to preserve whatever health we have left.… As more and more COVID restrictions were dropped, masks remained as the single best accessibility tool disabled people had.… Taking that away…tells us you want us dead.… You don’t want us in your world. And it hurts.”
This part.
I want to believe this will remove the plausible deniability for at least some voters and tank his chances, but. I’ve believed that before.
I daresay I’m feeling a little hope.
I think it’s also relevant that when I was growing up, people regularly switched their accounts between public and private depending on life circumstances, friend groups, etc. It was billed as a way to control whether other people could see your posts, NOT as a way to revoke or grant Facebook or any other entity any specific permission. It served a social function, and at a time when AI did not exist. They changed the meaning of that setting on us years after the fact, and I have not seen any article address that. No teenager in 2011 was thinking of the private/public setting as consent for AI use, and none of these articles talk about pictures that were set to private after being public for a while. It’s bad faith.