Was your Facebook data assimilated?

UPDATE: It’s no surprise, and I assumed that more would be revealed, but another app associated with supposed academic research has been outed as an assimilator.

UPDATE: Not everyone has had *that* notification from Facebook yet telling them whether they were assimilated by Cam Anal or not. If you haven’t, here’s a workaround: jump to this help page:

https://www.facebook.com/help/1873665312923476

If your Facebook stuff was harvested by Cambridge Analytica because someone you’re connected to used the “thisisyourdigitallife” app back before 2014, you should’ve had a notification from FB about it by now. If you haven’t seen a notification, it doesn’t necessarily mean you weren’t assimilated by CA or any of the other companies FB has banned since it changed its systems that year; the notifications are still being rolled out. The majority of the billion-plus users of Facebook will simply get an advisory notification, but some 88 million will presumably get a notice alerting them that their data was compromised.

A contact on Facebook suggested that:

For most of us, I suspect the stuff we post on FB would just confuse anyone trying to use it

Now, I realise they’re being flippant, but it goes deeper and I think a lot of people do not realise that.

He’s a contact, but we’re not even friends on Facebook and I can see a lot about him: his career history, all of his FB friends, his photos, the names of four of his family members who are on FB (son, cousin, and a couple of others), and a lot more besides. The fact that I can see it means anyone on Facebook can see it too and that could be someone building a profile for whatever reason…ID theft, insurance company, political rival…

I can see where he’s “checked in”, countries visited, railway stations, pubs, everywhere. I can guess which mobile phone company he’s with, because he likes one of the major companies and none of the others. I can see his political persuasion and affiliations and the politics he doesn’t like. I can guess where he likes to have a drink, because he only likes one pub on Facebook.

So, although it all seems trivial, it’s kind of not: a hacker could easily build up enough information to use social engineering (smooth talk) on a receptionist, operator, bartender, whoever, to dig deeper, perhaps gathering enough to open a bank account in his name or take out a loan. That is one of the reasons this is worrying, quite apart from the political propaganda and fake-news notifications that twist democracy.

Friend of a friend

As I’ve said though, the current debacle is not even about what you put online. The problem with CA specifically is that an academic created an app that he paid people to use; when those people (about 270,000 of them) accepted the terms and conditions, the app could then access all of the information that all of their friends had loaded into Facebook, including the stuff those friends had set as private. They reckon 88 million people were harvested by this app alone.

FB was called out on this issue in 2014, blocked that app, and then seemingly kept quiet about it. They changed their software (the Graph API) so that other apps couldn’t do quite the same thing after that time. But even now, every time somebody logs in with Facebook to do one of those “easy” quizzes or other apps and then shares it, the quiz app company gets to peek at a huge mass of their friends’ data.

Fourth Party Data

And, of course, all that data these third parties have harvested might be stolen by a fourth party. At the moment it’s pretty much hidden, but a hacker could break open their servers and post everything to the open web at any time. If we’re lucky, they stored it in encrypted form, but given the recent history of hacking that’s unlikely; a lot of the data that gets stolen and released on to the net was never encrypted in the first place.

Cutting noise from photos

UPDATE: March 2023 – I am currently using DxO PureRaw instead of the full PhotoLab. It does the same with denoising and lens/camera corrections. I then adjust curves and levels with PaintShopPro as I had been doing prior to trying PhotoLab.

UPDATE: January 2023 – I wrote this article back in 2018. Since then, various programs have come on to the market that offer AI approaches to denoising photographs, many of which are much easier to use and work really well. For example, the Topaz AI Denoise tool reduces noise and blur and can even reduce motion blur, as I demonstrated in an article with a photograph of a Peregrine Falcon flying overhead. DxO PhotoLab is my current denoise software of choice though; its DeepPRIME system effectively lowers the ISO of any noisy photograph by the equivalent of about three stops (like shooting at 400 rather than 3200 but with the same shutter speed and aperture). It has lens/camera corrections built in too, as well as allowing you to adjust levels, curves, saturation, and so on.


Noise can be nice…look at that lovely grain in those classic monochrome prints, for instance. But noise can also be nasty: those purple speckles in that low-light holiday snap from that flashy bar with the expensive cocktails, for example. If only there were a way to get rid of the noise without losing any of the detail in the photo.

Now, I remember noise in spectroscopy at university. You could reduce it by cutting out any signal that fell below a threshold. Unfortunately, as with photos, that filtering also cuts out detail and clarity. So, one solution was to run multiple spectra of the same sample, like taking the same photo repeatedly, and stack them together so that the parts of interest add up. You then apply the filter to cull the dim parts, the noise. The bits that are the same in each shot (or spectrum) add together, but the random noise generally does not overlap from frame to frame and so does not get stronger with the adding. The low-level filtering then applied removes the noise without cutting into the image. No more ambiguous spectral lines and no more purple speckles. That is the theory, at least; your mileage in the laboratory or with your photos may vary.
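The idea can be sketched in a few lines of Python with numpy. This is a toy simulation, not part of any real spectroscopy or photography workflow: average enough noisy copies of the same signal and the random noise shrinks by roughly the square root of the number of copies.

```python
import numpy as np

rng = np.random.default_rng(42)

# A clean "signal" -- a stand-in for the true spectrum or image
signal = np.sin(np.linspace(0, 4 * np.pi, 500))

# Take 16 noisy "exposures" of the same signal
n_frames = 16
frames = signal + rng.normal(0, 0.5, size=(n_frames, signal.size))

# Stacking: the signal adds coherently, the random noise does not
stacked = frames.mean(axis=0)

# Residual noise in one frame versus the stack
noise_single = np.std(frames[0] - signal)
noise_stacked = np.std(stacked - signal)

print(f"single frame noise: {noise_single:.3f}")
print(f"stacked noise:      {noise_stacked:.3f}")
# averaging N frames cuts random noise by roughly sqrt(N) -- about 4x here
```

With 16 frames the residual noise drops to roughly a quarter of a single exposure’s, which is why the low-level threshold can then remove what noise remains without touching the signal.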

De-noising by stacking repeat frames of the same shot comes into its own in astrophotography, where light levels are intrinsically low. Stack together a dozen photos of the Milky Way, say: the stars and nebulae add together, and you can then apply a cut to anything dimmer than the faintest of them and reduce the noise significantly. Stack together a few hundred and your chances are even better, although you will need some system to move the camera as time goes on to avoid star trails.

Then it’s down to the software to work its tricks. One such tool, ImageMagick, has been around for years and has a potentially daunting command-line interface for Windows, Mac, and Unix machines, but with its “evaluate-sequence” function it can nevertheless quickly process a whole stack of photos and reduce the noise in the output shot.

As a quick test, given it’s the middle of the afternoon here, I went to my office cupboard which is fairly dark even at midday, and searched out some dusty copies of an old book by the name of Deceived Wisdom, you may have heard of it. I piled up a few copies and with my camera on a tripod and the ISO turned as high as it will go to cut through the gloom, I snapped half a dozen close-ups of the spines of the books. The first photo shows one of the untouched photos, with a zoom in on a particularly noisy bit.

Next I downloaded the snaps, which all look essentially identical but each has a slightly different random spray of noise. I then ran the following command in ImageMagick (there are other apps with a GUI that will be more straightforward to work with than a command prompt). Within a minute or so the software had worked its magic(k).

magick convert *.jpg -evaluate-sequence median book-stack.jpg

And so here’s the result, or rather the zoomed-in area of the composite output photo: the median of the six essentially identical original frames, with the noise filtered to a degree in the combined image. There is far less random colour fringing around the letters and overall it’s crisper. The next step would be to apply unsharp masking and the like to work it up to a useful image.

It’s not perfect, but there is far less noise than in any of the originals, as you can hopefully see. The software you use may offer fine adjustments, but perhaps the most important factor is taking more photos of the same thing. That’s probably not going to work at that holiday cocktail bar, but with patience it should work nicely for astro shots. Of course, if I wanted a decent noise-free photo of my book, I could have taken the copies out of the cupboard, piled them on my desk, lit them properly, used a flash and diffuser and what have you, and got a really nice photo from a single frame. But then what would you learn from me doing that, other than that I still have copies of my old book?
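If you’re curious why the command uses a median rather than a simple mean, here’s a toy numpy demonstration, with synthetic frames standing in for real photos rather than anything from the ImageMagick workflow above. Hot, speckled pixels rarely land in the same place twice, so the per-pixel median discards them almost entirely, while a mean merely dilutes them.

```python
import numpy as np

rng = np.random.default_rng(7)

# Six "frames" of the same 64x64 grey patch, value 100, with salt noise:
# a few random pixels per frame blown out to 255 (hot pixels / speckles)
frames = np.full((6, 64, 64), 100.0)
for f in frames:
    ys = rng.integers(0, 64, 40)
    xs = rng.integers(0, 64, 40)
    f[ys, xs] = 255.0

mean_stack = frames.mean(axis=0)
median_stack = np.median(frames, axis=0)

# The mean drags every hot pixel's error into the result;
# the median throws the outliers away almost entirely
print("mean   max error:", np.abs(mean_stack - 100).max())
print("median max error:", np.abs(median_stack - 100).max())
```

The same logic is why astrophotographers prefer median (or sigma-clipped) stacking: it also rejects one-off intruders such as satellite trails and cosmic-ray hits.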

Reviewing a BenQ “eye-care” monitor

It is quite timely that monitor manufacturer BenQ has just sent me its latest bit of kit to review: a 27-inch (68 cm) “eye-care” monitor. The device boasts that it addresses many of the problems facing home workers, such as long periods of use, bright rooms, and non-ideal placement of monitors in home offices, and so can help reduce the risk of eye strain, dry eyes, headaches, poor posture, neck and shoulder pain, and other problems some computer users face with inappropriately sited monitors. There is also mention of problems specifically associated with blue light from screens and monitors.

Having almost settled in with new degressive reading glasses (head-up focus at PC distance, eyes-down focus for paperwork on the desk or phone), I was intrigued to see how it would feel to use such a monitor. On a point of order, I used to get awful headaches in my first publishing job, working with paper manuscripts and proofs and accessing a mainframe computer via a VT100 terminal (one of those awful green-screen things). This was 1989, long before the web, and although we had email and a database, nothing was graphical in that office. Anyway, no headaches and no residual eyestrain almost thirty years later. So I had no real need for a monitor that would reduce eyestrain, but I was nevertheless willing to give it a go…

From the information, the monitor has “Brightness Intelligence”, detecting ambient light levels and colour “temperature” of surroundings and adjusting its output accordingly. It has different levels of blue output, specifically too, for different working conditions. I assume these can be overridden when one wants to calibrate for true colour work – photos, graphics, video editing, for instance.

The monitor is also flicker-free (although I don’t think I’ve ever noticed flickering on any monitor I’ve used over the last three decades, even that old VT100 terminal). Maybe I am not consciously sensitive to that kind of flickering, although overhead mercury-tube fluorescent lights do sometimes make me nauseous. My contact at the company suggested revealing flicker on my old monitor and the review monitor using a smartphone video capture. But I am not entirely sure what that would prove other than the inadequacies of smartphone frame-refresh rates. If you don’t perceive an issue, then it’s not really an issue in this case.

An extra feature that I have not seen in any other monitor is smart focusing, whereby the window being used at any given time is highlighted more than the other windows. How this works when one is working with side-by-side documents remains to be seen, although the choice of which window to focus on is made by the user at any given time. My contact at BenQ tells me this is a function aimed at those watching video in a given window.

The eye-care monitor also has High Dynamic Range (HDR), which means detail in the blacks and the highlights is more akin to how we perceive the world around us than the compressed world of photos and images, where very dark greys become smeary black and off-white highlights are simply blown out, as photographers would say. There is a greater colour palette than might ordinarily be found in a TV screen. That makes the computer monitor more suitable for high-resolution video rendering, and for 4K HDR gaming consoles, than a lesser TV. Technically, the monitor boasts three times the contrast of normal panels, 33% greater brightness, and up to 93% DCI-P3 colour coverage. It can be set up for improved “eye-care” and for those who need high-quality video, graphics, and photo editing and processing.

I am not entirely sure why the panel has a stand that makes the top lean forward further than the bottom of the screen, but more to the point there is no universal fixing bracket, so I cannot install it on my adjustable cantilever desk arm and set it at the perfect height and angle for my posture and the way I work.

However, for me personally, there is a more problematic issue with such a large screen, regardless of the quality or eye-care features: my new spectacles. They are degressive, which means that when viewing something about 60 cm away straight on, the view is nice and sharp, but moving the eyes left or right, up or down, as one might do frequently with a 68 cm monitor, brings some distortion across my field of vision, because the lenses are designed to focus closer towards the lower, upper, and lateral edges. Someone with 20:20 vision, or presumably conventional reading lenses, would not suffer this effect, but I am not sure I can work with the need to move my head so much to maintain focus when I am used to a much narrower computer monitor, albeit with the same resolution. Of course, BenQ makes monitors from 21.5 to 32 inches in this range, so maybe there is a model that would suit my new specs.

The monitor is specifically a BenQ EW277HDR, which seems to be billed as a “video enjoyment” monitor elsewhere in the monitor market rather than focusing on the eye-care aspects.

Is it okay to kick a robot?

UPDATE: One of their robots can now dance, which is the way forward. Please don’t weaponise them, just let them twerk and moonwalk.

These robots can now open doors for each other and let themselves out…just sayin’

By now, you’ve probably seen the astounding quadruped robots that have been built and demonstrated by Boston Dynamics. These machines run like four-legged animals and don’t seem to mind when their human companions give them a kick…hold on…give them a kick? Is that really the best example to set impressionable people watching the videos?

One could argue that it’s a machine and it doesn’t “mind” being kicked, if that demonstrates just how robust the software and servos are to disturbances in the forces around them. But it is still quite a disconcerting thing to see. The next generation might be togged up with heads and fur, for instance, to make them look even more like animals; that would make for even more uncomfortable viewing, I reckon. And then, of course, such a robot might ultimately be endowed with artificial intelligence, sentience even. Would kicking a bot that knows what you’re doing be moral?

This also raises another question. If we build sentient robots, would it be sensible to give them pain receptors? Would we want them to know to avoid things that might hurt them? And, Asimov aside, might a robot in pain, having been kicked, feel that retaliation was the ethical thing to do from its perspective?

Asimov on the three laws of robotics

Laws of robotics are essentially the rules by which autonomous robots should operate. Such robots do not yet exist, but they have been widely anticipated by futurists, in novels, and in science fiction movies, and for those working in, or simply interested in, research and development in robotics and artificial intelligence, such laws are important pointers to the future…if we are to avoid a Terminator or Matrix type apocalypse (apparently). The most famous proponent of laws for robots was Isaac Asimov.

He introduced them in 1942 in a short story called “Runaround”, although others had alluded to such rules before this. The Three Laws are:

  • A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  • A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.
  • A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

Here’s the man himself discussing the three laws:

Grant a pardon to Alan Turing

UPDATE: Finally getting around to updating this page; it is now ten years since Alan Turing was granted a posthumous pardon (2013).

A new petition is now online seeking a pardon from the UK government for mathematician Alan Turing. In 1952, he was convicted of “gross indecency” with another man under an archaic law that no longer exists. He was given so-called “organo-therapy” (chemical castration) and two years later killed himself with a cyanide-laced apple, aged just 41.

Turing invented the concept of the modern computer, devised the first real test for artificial intelligence and cracked the German codes to help bring WWII to an end.

Turing also figured out how the leopard got its spots and what makes zebras stripy. Seriously: he worked out that the reaction and diffusion of chemicals in the growing embryo, particularly in pigment cells, would give rise to waves or rings of different pigment across the skin.

The 5 Ws (and How) of writing for the Web

Steve Buttry presents the six questions (the five Ws and How) that should guide your reporting as you interview, observe, and research to gather the facts for a story, whether that’s live-tweeting from a conference, a Facebook update, a blog post, or your first long-form feature for an online magazine.

They can, he says, also raise ethical issues you should consider as well as helping you home in on links, graphics and people with which to build your article.

The 5 W’s (and How) of writing for the Web « The Buttry Diary.

Google doodle celebrates vitamin C discoverer

Google doodle celebrates vitamin C discoverer – Today, Google celebrates the birthday of Hungarian physiologist Albert von Szent-Györgyi de Nagyrápolt (September 16, 1893 – October 22, 1986) who discovered vitamin C and the components and reactions of the citric acid cycle. He was awarded the 1937 Nobel Prize in Physiology or Medicine. He was also active in the Hungarian Resistance during World War II and entered Hungarian politics after the war.

New Facebook friends and blogging advice

If you’ve been on Facebook for any length of time, you will have had friend requests from people you don’t know. That’s fine. Often they’re just spammers. Sometimes they’re users with whom you might have a few friends in common. If our paths haven’t crossed, I usually redirect requests to the Sciencebase Facebook page instead of automatically accepting the request. Occasionally, the new would-be friend turns out already to “like” the page, says so, and starts a conversation. Also fine. It half proves they’re not some kind of bot. Virtual friendships can spring from such occurrences. It’s what this social media lark is all about, right?

Indian medical blogger Pranab Chatterjee, who runs Scepticemia, sent me a friend request and I went through the process described above. He pointed out that he already liked the fan page, was surprised to learn I also run Sciencetext, and wondered how I manage to juggle so many words at once. He also thought I might be able to offer him some advice on boosting visitors to his blog, as he felt it had reached a plateau. He wanted the recipe for my secret sauce of success. Well, I don’t have one; I just work (probably too) hard and hope for the best. So I turned the tables on Pranab and asked him what I could do to improve my blog.

He was a little taken aback, but offered some encouraging words about liking the clean look of the blogs and put in a request for more hardcore medical posts; he’s a doctor, hence the interest. I do write about medical matters, but I will probably leave the hardcore stuff to the hardcore medical bloggers (and I don’t mean Dr G).

Anyway, if there is a recipe for blogging success, other than going black hat, it has to be plenty of persistence, a wadge of hard work, and perhaps a very strong background in the subject on which you’re blogging, coupled with experience in the wider journalism industry and/or in science (or another field) and the conference circuit. I think enjoying writing probably helps, as does having an analytical approach. Being active in social media, particularly Facebook and Twitter, seems to raise one’s profile, although that is not really reflected in traffic in my experience (a few hundred to a thousand extra visits each week, perhaps, and the occasional spike). Others might perceive that differently, but more than 80% of Sciencebase traffic is still search-engine derived. Nevertheless, make every new blog post as polished and precise as you can, and then tell your Facebook followers and Twitter crowd about it. It’s also always worth doing a spot of white-hat SEO, just to improve search traffic.

That’s just a few almost random thoughts. The bottom line is: if you enjoy writing, you probably should have a blog. If you enjoy people, you probably should make friends.