So, I'm preternaturally loath to embrace tech-giant schadenfreude simply because it fits "Never Trump" desires to overturn a presidential election and gives psychological relief to the losing side. I would, in addition, observe the following:
- If you haven't paid for the service (e.g., posting gigabytes of cat videos in perpetuity), somebody is.
- If you're using the service, you're the product (which doesn't bother me, simply because I like the give-and-take of opposing credos and ideas; it generally results in needed revisions to ill-conceived first drafts).
- People put aspects of their lives online to get social media attention and adulation. Given that "user need," it seems silly to believe anything placed in that electronic coliseum will somehow be subject to default privacy mandates unless the user employs the privacy tools Facebook makes available.
- At worst, Facebook data is used for a more focused form of advertising, something I was inundated with long before Facebook escaped the Harvard quad. So far I haven't fallen victim to QVC addiction, nor have I let an anonymous poster direct any action of mine.
- In any case, Facebook is a piker compared to the folks who are really serious about metadata analysis (e.g., Google).
From my standpoint, the availability of the platform has more value to me than the opportunity cost of handing knowledge of my personal preferences to some third party looking for a better way to sell its goods (it's analogous to Kroger tracking my purchases and offering me a 3-4% discount on them). Is there a higher, more serious cost to the use of this technology? Probably, but I doubt it has to do with getting a better response to advertising "cold calls".
When I construct a software hazard analysis for non-product software, I'm most concerned with identifying hazards associated with high-severity harms (I tell my team: we can live with changes in estimated hazard or harm occurrence, but we don't want to miss potential high-severity harms). The trickiest hazards to dig out are those associated with algorithms whose behavior we don't fully understand across all possible inputs. The danger in these circumstances is that we get a result we trust without really understanding how the result was produced. Which I guess is another way of saying we are probably ill-advised to allow "expert" systems to make decisions for us. That puts us on the path to becoming a "global useless class" as a species. I don't see how that works out well for us.
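The severity-first triage described above can be sketched as a small ranking routine. This is only an illustrative sketch; the severity and occurrence scales, field names, and example hazards below are hypothetical, not drawn from any particular standard or from my actual work product.

```python
# Hypothetical ordinal scales for illustration only.
SEVERITY = {"negligible": 1, "minor": 2, "serious": 3, "critical": 4}
OCCURRENCE = {"improbable": 1, "remote": 2, "occasional": 3, "frequent": 4}

def triage(hazards):
    """Sort hazards so high-severity harms surface first, regardless of
    estimated occurrence: a misjudged occurrence can be revised later,
    but a missed high-severity harm cannot."""
    return sorted(
        hazards,
        key=lambda h: (SEVERITY[h["severity"]], OCCURRENCE[h["occurrence"]]),
        reverse=True,
    )

# Hypothetical example hazards.
hazards = [
    {"name": "UI glitch", "severity": "minor", "occurrence": "frequent"},
    {"name": "silent data corruption", "severity": "critical", "occurrence": "remote"},
    {"name": "stale cache", "severity": "serious", "occurrence": "occasional"},
]

for h in triage(hazards):
    print(h["name"], "-", h["severity"], "/", h["occurrence"])
```

The point of ranking on severity before occurrence is exactly the one made above: a frequent-but-minor hazard never outranks a rare-but-critical one.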
Alt-J (an alternative indie band I like) has an album called "This Is All Yours," which expands on themes from 1980s movies that impressed one of the band's writers. "The Gospel of John Hurt" draws from the movie "Alien" (John Hurt played the character who, in very unusual fashion, birthed the first alien in the movie; hard to forget if you've seen it). That type of biological entity would be catastrophic for any species that actually encountered it. Needless to say, "The Gospel of John Hurt" is not a gospel of hope. Let's hope we understand the AI algorithms we're developing better than the crew of the Nostromo understood its alien passenger in "Alien".
