The legal minefield of 3D printed guns

 

Richard Matthews, University of Adelaide

3D printed guns are back in the news after Queensland set a legal precedent by handing Kyle Wirth a six-month suspended sentence for fabricating a number of gun parts.

As presiding Judge Katherine McGuinness acknowledged, Wirth didn’t produce an entire gun – police had to add a few key parts before it could successfully fire a bullet – but he was “trying to make a gun”.

As such, she said “there is a real need to deter and protect the public from such offending”.

But if it’s illegal to build a gun via conventional means without a licence, what’s the concern over making guns using 3D printers in particular?

And for those who are either researching the capabilities of 3D printers – a form of additive manufacturing – or using them at home or in their business, it’s important to understand the legal boundaries under which they can be used.

3D printed firearms in Australia

3D printed guns currently occupy a grey area in terms of their legality in many jurisdictions around Australia. For example, the South Australian Police released a guide outlining which kinds of imitation firearms are considered legal.

The distinction between a “regulated imitation firearm” and a children’s toy is significant, as a South Australian man discovered in 2015. He was charged with a firearms offence after police found a toy gun in a box along with a single shotgun shell.

The judge acquitted him because the gun was clearly a child’s cap gun and could not be modified to fire the shell.

However, according to the South Australian Police’s guide, the “gun” pictured at the top of this article, although non-functional, is technically neither a “moulded imitation firearm” nor is it an “imitation firearm carved from timber, plastic or other material”. This means it’s unclear how it would be regarded by police or the courts.

New South Wales takes a different approach on the issue. The Firearms and Weapons Prohibition Legislation Amendment Bill 2015 made it illegal to possess digital files that can be used to manufacture firearms on “3D printers or electronic milling machines”.

The act was amended “to create a new offence of possessing digital blueprints”, although the definition of a “digital blueprint” is a little ambiguous. As defined, it captures “any type of digital (or electronic) reproduction of a technical drawing of the design of an object”. As written, this could even mean a photograph of a technical drawing. But technical drawing files are not always needed for 3D printing.

In 3D printing, drawing files are used to create GCode, a computer control language that guides the print head and controls how much plastic to extrude. Is GCode a digital reproduction? Even if it is, the law does not stop someone 3D printing gun parts in another jurisdiction in Australia or overseas where they’re not illegal and then posting them back to NSW.

Is a list of coordinates in three dimensional space a digital reproduction of a technical drawing?
Author provided
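To see why the question matters, here is an illustration only – a short Python snippet printing a handful of hypothetical GCode commands of the sort a slicer produces from any 3D model (none of it taken from a real firearm file). The output is bare machine movements with coordinates and extrusion amounts, nothing that looks like a technical drawing.

    # Illustrative only: hypothetical GCode of the kind a slicer generates
    # from a 3D model. Each command is a machine movement, not a drawing.
    gcode = [
        "G28             ; home all axes",
        "G1 Z0.2 F1200   ; move the nozzle to the first layer height",
        "G1 X10 Y10 E0.5 ; extrude plastic while moving to (10, 10)",
        "G1 X40 Y10 E1.8 ; extrude along the X axis",
        "G1 X40 Y40 E3.1 ; extrude along the Y axis",
    ]
    for command in gcode:
        print(command)

Whether a court would treat a file of such coordinates as a “digital (or electronic) reproduction of a technical drawing” is exactly the ambiguity at issue.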

It was this fear that drove the Queensland Palmer United Party to introduce a bill in 2014 to make 3D printing of firearms illegal. It was rejected by the parliamentary committee and never reintroduced.

When Labor took power in Queensland following the 2015 election, it defended the decision not to revive the bill, releasing a statement that “Queensland already has legislation dealing with the unlawful manufacture of weapons that carries with it some of the harshest penalties in Australia”.

Hence Kyle Wirth was charged in 2015 with manufacturing offensive weapons, including a plastic knuckle duster. He was not charged under any legislation specific to 3D printed parts, of the kind the PUP bill would have introduced.

Plastic or not, it is illegal under nationally unified gun laws to make a gun without a licence. If that is the case, why did NSW feel the need to ban digital blueprints? The answer may lie in the future prospects of 3D printing.

The parts Wirth printed and stored in bags.
Supplied: Queensland Police Service

Towards the future

In the next 20 years we may be able to print drugs, metals and other substances at an atomic level – possibly all at home.

Regulation of these things is currently predicated on the idea that producing them requires expertise and specialised equipment. But that may not be the case for long.

This will mean we need a new unified approach to legislation that specifically speaks to the capabilities of 3D printers, and the distribution of the files they use.

New South Wales is the only state that has started outlawing the digital blueprints needed for additive manufacturing of illegal objects. This is a step in the right direction.

However, we need a way to classify digital blueprints. The Australian Classification Board is already responsible for passing judgement on a wide array of media. In the future we will likely see such an agency extended to cover digital blueprints made available or offered for sale to the public.

The Conversation

Richard Matthews, PhD Candidate, University of Adelaide

This article was originally published on The Conversation. Read the original article.


Apple’s Million Dollar Security

Apple Inc. often boasts about its security capabilities. As announced at WWDC16, there has still been no malware seen at scale affecting iOS devices. This is vitally important when we consider the sheer volume of personal information stored on our mobile devices. But what gives Apple this secure advantage over other mobile platforms? Per the WWDC16 presentation, it comes down to three core “iOS Security Pillars”. As my background is in electrical engineering, I’ll only focus on one of these: the hardware, or platform.

iOS Platform Security

The key feature of iOS security lies in the very hardware of the devices. Security is literally built into the silicon at the device core. Every phone stores Apple’s public key in physical silicon, in a section of Boot ROM inside the Application Processor. This key is then used to boot the device through a chain of trust: the Low-Level Bootloader, then iBoot, then the kernel. If any of these stages is not signed with Apple’s private key, the device will refuse to boot. The same key is also used to authorise updates for each device; if the signature is missing, the update doesn’t happen. This ensures that a device remains within the Apple ecosystem and does not fall prey to a man-in-the-middle attack.

IMAGE: Apple’s public key is used to verify each stage of the iOS boot has been digitally signed with Apple’s private key.

The only place this private key is stored is within Apple itself. One can assume it is heavily guarded and secured.
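To make the chain-of-trust idea concrete, here is a rough conceptual sketch – not Apple’s code, and an Ed25519 key pair stands in for Apple’s actual signing keys. Each boot stage is verified against the public key baked into the Boot ROM before it is handed control.

    # Conceptual sketch of a signature-verified boot chain (not Apple's
    # implementation). The Boot ROM holds only the vendor's public key;
    # every stage must verify before it is allowed to run.
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    vendor_private = Ed25519PrivateKey.generate()   # held only by the vendor
    boot_rom_public = vendor_private.public_key()   # burned into silicon

    # Hypothetical boot images, each signed by the vendor at build time.
    chain = []
    for name in ("Low-Level Bootloader", "iBoot", "kernel"):
        image = f"{name} firmware image".encode()
        chain.append((name, image, vendor_private.sign(image)))

    def secure_boot(stages):
        for name, image, signature in stages:
            try:
                boot_rom_public.verify(signature, image)  # raises if unsigned or tampered with
            except InvalidSignature:
                print(f"{name}: signature check failed, refusing to boot")
                return False
            print(f"{name}: verified, handing control to the next stage")
        return True

    secure_boot(chain)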

Apple’s public key is not the only key used on the device to ensure security. User keys are stored within the physical silicon of the A7 or later processors, in an area known as the Secure Enclave (SEP). The SEP is provisioned during manufacturing with unique data that is NOT known to Apple. This data is then used to encrypt the SEP’s storage and serves as the basis for its cryptographic functions. This means that user data, such as fingerprint data, is encrypted at the most basic level and cannot be accessed even by Apple. Furthermore, if a device has more than 10 unsuccessful unlock attempts, the SEP will refuse to communicate with the device. FBI, eat your heart out.

IMAGE: Apple stores user keys encrypted in the “Secure Enclave” a section of the physical silicon hardware of the A7 Processor.
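A heavily simplified model of that idea follows – a sketch only, not Apple’s implementation. A secret fused into the chip at manufacture is entangled with the passcode to derive the data key, and the enclave stops answering after ten failed attempts.

    # Simplified model of a secure enclave: the per-device secret never leaves
    # the chip, keys are derived from it plus the passcode, and too many failed
    # attempts lock the enclave. Not Apple's implementation.
    import hashlib
    import hmac
    import secrets

    DEVICE_UID = secrets.token_bytes(32)  # unique per-device secret, fused at manufacture

    def derive_key(passcode: str) -> bytes:
        # A slow key-derivation function entangles the passcode with the device
        # secret, so brute force has to run on the device itself.
        return hashlib.pbkdf2_hmac("sha256", passcode.encode(), DEVICE_UID, 200_000)

    class EnclaveModel:
        MAX_ATTEMPTS = 10

        def __init__(self, passcode: str):
            self._key = derive_key(passcode)
            self._failures = 0

        def try_unlock(self, passcode: str) -> bool:
            if self._failures >= self.MAX_ATTEMPTS:
                raise RuntimeError("enclave locked: too many failed attempts")
            if hmac.compare_digest(derive_key(passcode), self._key):
                self._failures = 0
                return True
            self._failures += 1
            return False

    enclave = EnclaveModel("1234")
    print(enclave.try_unlock("0000"))  # False
    print(enclave.try_unlock("1234"))  # True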

Another feature of the platform is that applications run within their own sandbox. This isolates each app’s data from other apps, preserving security should a developer make a mistake that creates vulnerabilities in the system. Combined with transparent permissions, this means users know exactly what an app can do. It doesn’t stop developers requesting permissions that are not needed, but that’s a work in progress. Every app is also digitally signed by Apple to ensure anything running on the phone is legitimate. An attacker’s first step is usually to execute malicious code. If this code isn’t signed, it won’t be run.

The last key to the platform is Touch ID. Apple didn’t just install a fingerprint sensor; it made sure this data is encrypted at the most basic level so that even Apple can’t access your biometrics. It is stored in the SEP, and users report that the fingerprint sensor cannot simply be replaced. The inclusion of Touch ID has seen passcode use on devices increase from 1 in 2 users to 9 in 10.

Apple highlights the importance of security in the WWDC16 presentation. As with all security measures, though, people will always attempt to circumvent them and find flaws. Unlike other platforms, Apple’s measures generally require chaining exploits for 5-10 vulnerabilities to create a jailbreak. Such a jailbreak is estimated to have a black-market value of over $1 million USD. By comparison, individual exploits on the black market are estimated to sell for between $5,000 and $15,000 USD.

Is Apple’s security really a million-dollar business? In ten years of operation, iOS has not seen “any malware at scale”, and Apple has even fought off attempts by nation states and others to compromise data. You may not be an Apple “fanboi”, but there are certainly lessons we can learn from their approach to security as a process and not a destination. My only hope is that they incorporate the same SEP technology in their other computer products, especially with the release of their new Apple File System (APFS).

The Body of Knowledge

I want to introduce you to a concept I call the body of knowledge. I’ve seen it crop up in many places across the internet and it seems to be a common lesson, yet many people still forget it and its meaning. As a PhD candidate with a family, an active role in student politics, community volunteering and friends, there are many demands on my time. Putting my studies into perspective helps me realise the PhD isn’t the all-consuming beast some make it out to be.

First, let’s start with a circle which represents all of the knowledge known to humanity.


As we go through each stage of our schooling, the amount we learn can be represented within this circle, starting from preschool and primary school and moving all the way up to our final years of high school.


Many of us stop here, satisfied in knowing enough to live a fulfilling life within society. But many of us continue onward and obtain a bachelor’s degree. This degree gives you a specialisation in a particular field.


A master’s degree furthers this specialisation through independent research or further coursework.


Eventually, though, higher education culminates in the pursuit of a doctorate. The first stage of this is a literature review, which takes you to the very boundary of human knowledge on a single topic.


Until one day, you make a discovery and push the body of knowledge just that little bit further.


But, in the grand scheme of things, the circle still keeps its shape. To an outsider it seems no different than before.


Sometimes your studies may seem overwhelming, especially in the pursuit of a PhD, but take a step back every now and again to assess the bigger picture.


 

Don’t drop those plates.

A properly balanced PhD is possible.

Nuclear: The Unlikely Environmental Hero | Adelaide University Union

Words by: Richard Matthews for On Dit

Netflix is a fantastic service for procrastination and education alike. I recently binge-watched Kampen om tungtvannet, or The Heavy Water Wars, a miniseries documenting the lives of the key players in the WWII struggle over the procurement of deuterium oxide for the development of the atomic bomb. One of them was Werner Heisenberg, famous for his uncertainty principle. The miniseries leads us to believe that Heisenberg’s attempt to build a nuclear reactor was about creating limitless energy, not a weapon, but a means for humanity to excel: a solution to a greener future.

SRC Councillor Jack Crawford has already made the case for why you should defy our ‘militant, nuclear chancellor’ and attend the anti-nuclear rally. I’m here to discuss why you should get to know nuclear before condemning it so that, like Heisenberg, humanity may excel.

The issue of a nuclear waste dump is more complex than whether we should store the world’s high-level waste in South Australia. It is more nuanced than concerns about nuclear weapons proliferation. Public perception is so against nuclear that current federal laws prevent nuclear plants from generating electricity (even though a local nuclear plant may have prevented the recent state-wide blackout). If we are serious about solving the problems of climate change, affordable power and energy security, nuclear needs to be a viable option.

In a great 14-minute talk, Michael Shellenberger, an environmental policy expert, argued for nuclear power plants that would break the nuclear weapons cycle and solve the renewable energy crisis.

As Shellenberger discussed, public perception is a massive hurdle. A waste dump is a starting point for easing public perception on this overly emotive issue while taking responsibility for the uranium we mine. The ultimate end game has to be the introduction of thorium reactors as a way to provide stable base-load generation to complement our renewable energy targets.

Crawford is right; it is a student issue – getting a job after uni is a student issue. With a nuclear dump come approximately 9,600 jobs, many of them in the fields of civil, mining, electrical and mechanical engineering.

These are all fields suffering a significant downturn for new graduates in South Australia. There are also flow-on jobs, with the business, regulatory, management and legal fields all seeing new jobs created.

If you want to attend the rally, get to know nuclear first. The benefits of nuclear energy to the environment are greatly underplayed. Like Heisenberg, we are faced with a choice: we can either jump on the bandwagon, or we can choose to support the greater good. We’ve seen too much politicising of important issues recently; let’s have science win, especially when nuclear may be the solution to a healthier environment and a greener future.

If you want to get to know nuclear, you can do so by visiting www.nuclear.sa.gov.au

Richard Matthews is the Disability Officer and the Postgraduate-elect for the Student Representative Council. He is a member of the University’s Academic Board and holds a Bachelor of Electrical and Electronic Engineering with Honours. He has five years of experience working in power and engineering-related fields.

 

 

Boycotting the Census


 

Why I’m taking leave of my Census: a privacy expert’s reluctant boycott


Dear Magistrate,

In case the ABS decides to prosecute me for non-completion of this year’s Census, I thought I should explain to you my reasons for deciding that a boycott is the only moral position I can take.

The short version is this:  Yes to a national snapshot.  No to detailed data-linking on individuals.  That’s not what a census is for.

I have wrestled with what my personal position should be.  I am normally a fan of the Census.  It has an important role to play in how we as a people are governed.  As a former public servant with a policy and research background, I believe in evidence-based policy decisions.  As a parent and a citizen, I want good quality data to help governments decide where to build the next school or hospital, or how to best direct aged care funding, or tackle indigenous disadvantage.

But as a former Deputy Privacy Commissioner, and a privacy consultant for the past 12 years, I can also see the privacy risks in what the ABS is doing.

Months ago I wrote an explanation of all the privacy risks caused by the ABS’s decision to keep and use name and address information for data-linking, in the hope that reason would prevail.  I was assuming that public and political pressure would force the ABS to drop the proposal (as they did in 2006 when I was Chair of the Australian Privacy Foundation and we spoke up about it).  Lots of people (as well as one penguin, the marvellous Brenda, the Civil Disobedience Penguin) are now coming to realise the risks and speak out against them, but right now, just a few days out, it looks like the ABS is pushing ahead regardless.

There are those who say that we shouldn’t boycott the Census because it is too important.  To them I say:  Bollocks.  (If you pardon my language, Your Worship.)  We know where that ‘too big to fail’ argument leads: to more arrogance, more heavy-handed treatment of citizens, more privacy invasions.

And there are the demographers who say the Census data should be linked to other health records like PBS prescription records, because if we as patients were asked for our identifiable health data directly, we would refuse to answer.  To them I say:  Hello, THAT’S THE POINT!  It’s my health information, not yours.  You should ask me nicely, and persuade me about your public interest research purpose, if you want access to my identifiable health records.  Maybe then I will say yes.  But going behind people’s backs because they would refuse their consent if asked is not what the National Health & Medical Research Council’s National Statement on Ethical Conduct in Human Research is about.

This morning I suddenly realised: the ABS is behaving like a very, very bad boyfriend.  He keeps on breaking promises, pushing boundaries and disappointing you, but you forgive him each time.  You don’t want to call him out in case then he gets angry and dumps you.  So you just put up with it, and grumble over drinks to your girlfriends.

And this bad boyfriend keeps saying these reassuring things, like “oh we’ll only keep the data for four years”, and “the names and addresses are in a separate database”.  To that I say:  Nice try, but that’s a red herring.

Although there are certainly heightened privacy and security risks of accidental loss or malicious misuse with storing names and addresses, the deliberate privacy invasion starts with the use of that data to create a Statistical Linkage Key (SLK) for each individual, to use in linking data from other sources.  Please don’t believe that SLKs offer anonymity.  SLKs are easy to generate, with the same standard used across multiple datasets.  That’s the whole point: so that you can link data about a particular individual.  For example, Malcolm Turnbull would be known by the SLK URBAL241019541 in the type of datasets the ABS wants to match Census data against, including mental health services (yes, mental health!) and other health records, disability services records, early childhood records, community services records, as well as data about housing assistance and homelessness.

Anyone with access to these types of health and human services datasets can search for individuals by generating and searching against their SLK.  All you need to know is their first and last names, gender and date of birth.  Scott Morrison is ORICO130519681.  Kylie Minogue is INGYL280519682.  Delta Goodrem is OOREL091119842.  Now tell me that privacy will be absolutely protected if Census data is coded and linked using an SLK as well.
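To show just how little an SLK hides, here is a minimal sketch assuming the standard SLK-581 construction: letters 2, 3 and 5 of the surname, letters 2 and 3 of the first name, date of birth as DDMMYYYY, and a sex code (1 for male, 2 for female). It reproduces the examples above.

    # Minimal sketch of an SLK-581-style linkage key. Short names are padded
    # with the placeholder '2'; punctuation and spaces are stripped first.
    import re
    from datetime import date

    def slk581(given: str, family: str, dob: date, sex_code: int) -> str:
        def pick(name: str, positions) -> str:
            clean = re.sub(r"[^A-Z]", "", name.upper())
            return "".join(clean[p - 1] if p <= len(clean) else "2" for p in positions)
        return pick(family, (2, 3, 5)) + pick(given, (2, 3)) + f"{dob:%d%m%Y}{sex_code}"

    print(slk581("Malcolm", "Turnbull", date(1954, 10, 24), 1))  # URBAL241019541
    print(slk581("Kylie", "Minogue", date(1968, 5, 28), 2))      # INGYL280519682

Anything that can be regenerated from a name, a date of birth and a sex code in a dozen lines of code is not an anonymising transformation.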

Never mind four years; the ABS could destroy all the actual name and address data after only four days or four seconds – but if they have already used it to generate an SLK for each individual Census record, the privacy damage has been done.

(Oh, and that line about how “we’ve never had a privacy breach with Census data”?  To that I say:  Great!  Let’s keep it that way!  DON’T COLLECT NAMES.)

So I say no.  No.  I am not putting up with that bad boyfriend any longer.  I believe in the importance of the Census, which is why I am so damn pissed off (sorry again Your Worship) that the ABS is being such a bad boyfriend to the Australian people: trashing not only our privacy, but the value of our data too.  It’s time to break up with them.

I have come to this decision with a heavy heart.  I am normally a law-abiding citizen.  Plus, I don’t really fancy facing a $180 fine for every day that I refuse to comply with a direction to complete the Census, with no cap on the number of days.  (Seriously, what kind of heavy-handed law is that?  Are you really going to keep hitting me with daily fines for the rest of my life, Your Worship?)

I know that I could give the ABS misinformation instead.  Say my name is Boaty McBoatface and that I am a 97-year-old man living with 8 wives, that I have 14 cars, my language at home is Gibberish and my religion is Jedi.  Giving misinformation is a common, rational response by about three in ten people who want to protect their privacy when faced with the collection of personal data they have no choice about.  Of course, that is also a crime in relation to the Census, but at least that one maxes out at a $1,800 fine.

But I won’t do that, because I do believe in the integrity of the census data.  I don’t want people to have to give misinformation in order to protect themselves.  We shouldn’t be placed in that position.

The definition of ‘census’ is “an official count”.  I actually want to stand up and be counted.  But only counted; not named or profiled or data-matched or data-linked, or anything else.  The privacy risks of doing anything else are just too great.

I have thought about just refusing to provide my name.  But even if I don’t give my name, if the ABS is determined to link my Census data with other datasets, there would be enough other information in my Census answers (sex, age, home address, previous home address, work address) to let them proceed regardless.  It won’t be enough to protect my privacy.

So until the ABS reverses its decision to match Census data about individuals with other datasets about individuals, I am not going to answer the Census questions at all.

I am sorry, Your Worship.  I don’t like being forced to choose, because I believe Australians deserve to have both good quality statistical data for government decision-making, AND their privacy respected.  But on Tuesday night, I will choose privacy.

The Census should be a national snapshot, not a tool for detailed data-linking on every individual.  Now convict and fine me if you disagree.

Yours sincerely,

Anna Johnston

Pokemon Go – Trusting an Augmented World

Pokemon Go is a viral Augmented Reality (AR) game for Android and iOS. Recent revelations show that some users have been required to grant the app full access to their Google account, prompting many to cite security concerns.

AR is a new genre of gameplay which takes the player’s interaction with the real world as part of its basic controls. Location-based games like Pokemon Go were first made mainstream by Niantic with the release of an earlier location-based smartphone game called Ingress. Ingress was hailed as a way for “nerds to get in shape”, with the novel concept that to move about the game’s map users physically had to walk to locations called portals. These portals are places where users battle for control of land-based areas by deploying objects called resonators to take control of the portal and obtain that portal’s key. Players are then able to link 3 portals together, if they have the correct keys, to create a field. These fields earn points for the player’s team, of which there are only two: the Enlightened (affectionately known as frogs owing to their green colour) and the Resistance (known as smurfs due to being blue). Ingress gained a cult following owing to its unique gameplay and conspiracy-like storyline. See the Wikipedia article for a more in-depth discussion.


Ingress gameplay screenshots

When Ingress started, Niantic allowed players to submit locations as portals. These locations were supposed to be places of cultural significance such as landmarks, artwork, educational institutions and religious buildings. This leads us to Pokemon Go, a new location-based AR game built by the same company that made Ingress: Niantic Labs. In making Pokemon Go, Niantic have ported much of the map data they have gathered over the nearly 4 years since Ingress’s closed beta in November 2012. This means that many of the user-submitted portals in Ingress are now PokeStops or Gyms in Pokemon Go. Two games, one set of map data.

This has not been without its own issues. What once made perfect sense as a portal in a game whose storyline was built on secrecy may not make sense as a gym in Pokemon Go.


In fact, a house built out of an old church was mistakenly identified as a portal in Ingress and has made its way into Pokemon Go as a Gym.


However, the most startling revelation seems to be that iOS users installing the app have granted Niantic full access to their Google account in order to log in. Full access means the application can read and send emails on the user’s behalf without prompting, view, edit or delete the contents of Google Drive, browse your search history or, perhaps more concerningly, access Maps navigation history. [Update: it turns out this may have been misreported in the media hype. Full access doesn’t mean the above but rather access to all the data in your account, such as name, address and birth date, with edit permissions.]

Niantic commented on the situation in a statement provided to The Verge:

We recently discovered that the Pokémon GO account creation process on iOS erroneously requests full access permission for the user’s Google account. However, Pokémon Go only accesses basic Google profile information (specifically, your User ID and email address) and no other Google account information is or has been accessed or collected. Once we became aware of this error, we began working on a client-side fix to request permission for only basic Google profile information, in line with the data that we actually access. Google has verified that no other information has been received or accessed by Pokémon Go or Niantic. Google will soon reduce Pokémon Go’s permission to only the basic profile data that Pokémon Go needs, and users do not need to take any actions themselves.

So it would seem that this was just a careless error on the part of the developers. But is this acceptable? Or is this a symptom of developer culture?

Many applications require extensive permissions in order to function. Perhaps the most widely discussed is Facebook’s Messenger and its permissions:

This app has access to:

Identity

  • find accounts on the device
  • read your own contact card
  • add or remove accounts

Contacts

  • find accounts on the device
  • read your contacts
  • modify your contacts

Location

  • precise location (GPS and network-based)
  • approximate location (network-based)

SMS

  • edit your text messages (SMS or MMS)
  • receive text messages (SMS)
  • send SMS messages
  • read your text messages (SMS or MMS)
  • receive text messages (MMS)

Phone

  • read phone status and identity
  • read call log
  • directly call phone numbers
  • reroute outgoing calls

Photos/Media/Files

  • modify or delete the contents of your USB storage
  • read the contents of your USB storage

Storage

  • modify or delete the contents of your USB storage
  • read the contents of your USB storage

Camera

  • take pictures and videos

Microphone

  • record audio

Wi-Fi connection information

  • view Wi-Fi connections

Device ID & call information

  • read phone status and identity

Other

  • receive data from Internet
  • download files without notification
  • control vibration
  • run at startup
  • draw over other apps
  • pair with Bluetooth devices
  • send sticky broadcast
  • create accounts and set passwords
  • change network connectivity
  • prevent device from sleeping
  • install shortcuts
  • read battery statistics
  • read sync settings
  • toggle sync on and off
  • read Google service configuration
  • view network connections
  • change your audio settings
  • full network access

These may seem extensive, and may even suggest that the application wishes to spy on you. However, as has already been discussed extensively, these permissions are relatively harmless and are needed for much of the behind-the-scenes operation of the application.

So does Pokemon Go really need full access to a Google account just to create an account? Hell no. What we have seen here is simply a developer using a stock template, most likely from the days when Niantic was owned by Google, and forgetting to change the default permissions in the file. A careless error. One that would not have been tolerated by IBM’s elite Black Team back in the glory days of programming: a team whose sole job was to break your code in the most horrific ways possible.

My view is that this is the true issue to come out of all this: programmers have become careless and do not check their code for bugs thoroughly enough. Users should not have to worry about cyber security, as the most secure option should be the default. It should be the programmer’s responsibility to ensure their applications are trustworthy.

In the meantime, it seems the real security concern with these games is not going to be our online behaviour but rather our real-world behaviour, which isn’t so ’90s-child friendly.


 

The Playboy centrefold at the centre of computer science

This article was originally published on May 11, 2015 for The Conversation. Read the original article.

Richard Matthews, University of Adelaide

The November 1972 issue of Playboy magazine is the magazine’s best-selling issue of all time. This is not because of the articles, but due to the proliferation of one iconic image from the magazine: that of centrefold model Lena Söderberg.

The original image was digitised by researchers at the University of Southern California Signal and Image Processing Institute (SIPI) in 1973. Alexander Sawchuk, an assistant professor of electrical engineering, his graduate student and the SIPI lab manager were frantically looking for a new image for a research paper.

They had already exhausted their stock of the usual test images. It was at this moment – according to legend – that a colleague walked in with the November 1972 issue of Playboy. Seeing the predicament the researchers were in, he tore a 5.12-inch strip from the top of the centrefold and fed it to their scanner. As the scanner had a resolution of 100 lines per inch, the resulting image was a perfectly cropped head-and-shoulders image, 512 x 512 pixels in size (5.12 inches at 100 lines per inch gives exactly 512 lines).

This image has since been used widely in imaging processing circles. That’s because the nature of the image makes it amenable for testing a wide range of image processing algorithms.

The image contains a mixture of detail, colour, shading, focus, textures, reflections and flat regions that allow testing of multiple algorithms. These algorithms range from edge detection to denoising and even include shrinking the image down to the size of a human hair.

Pornography in the lab

Given the provenance of the image, its use is not without controversy. In a recent article in the Washington Post, a student from the Thomas Jefferson High School for Science and Technology in the US, Maddie Zug, suggested the school’s use of the image in her computer science course was evidence that the school’s culture unfairly marginalises women in an already male-dominated subject.

Maddie isn’t the only one to have taken offence or to look for alternatives. In a 2013 paper by Deanna Needell and Rachel Ward, the authors got permission from the agent of Fabio Lanzoni to use the popular male model’s likeness rather than Lena’s.

Fabio Test Image

The outrage over Lena is less about the intrinsic properties of the image itself, but rather about the image’s provenance. Maddie argued that by using the Lena image, women are turned away from computer science.

Yet Needell and Ward, two female researchers in this space, saw it as an opportunity to highlight gender issues in society at large by replacing the image with one of a male model instead.

Heidi Norton, a second-year PhD student at the University of Pennsylvania and co-founder of the website Beta Pleated Chic, which is devoted to women in STEM, has argued that the image is a relic of a bygone era when academia was perceived as an “Old Boys’ Club”.

Norton says:

[…] in some ways, I felt like my strong negative reaction towards this image was unjustified […] I realised the (provenance) had nothing to do with the image itself. It had more to do with the fact that our culture historically (and often at present) values the beauty of women much more than their intelligence or talents.

It is accepted that all STEM fields need to attract more women into their ranks to achieve greater gender equality and diversity. However, the use of the Lena image is a case of correlation being mistaken for causation.

Disregarding the provenance of the Lena test image, we see that it is like many others within the SIPI database. The fact that the image is of an attractive woman should not be a deciding factor in this discussion of its use. Art in all of its many forms exists to capture beauty. Is it, therefore, not a logical conclusion that subjects of beauty, like Fabio and Lena, are going to turn up in our tests?

Computer science lecturer Hannah Dee, from Aberystwyth University, summed the issue up perfectly when she wrote for the Software Sustainability Institute in March 2014:

[…] despite my avowedly feminist stance, I’m somehow unable to get that annoyed about [Lena].

The fact that there’s a historic Playboy image at pretty much every conference I go to, and on the walls of my colleague’s labs, and downloaded with every single image processing library I use, well… on the one hand, it’s part of that drip-drip-drip of strangeness that comes from working in a male-dominated field, where the topics of conversation and the general attitude can be a little disconcerting. But on the other hand, with changing cultural attitudes, and the effect the internet has had on pornography, the entire centrefold (yes, you can easily find it online if you look) seems very tame indeed by today’s standards. And the crop that is used in image processing research is, well… I’ve developed quite an affection for the picture. It’s one of the quirks of computer science. So when I was asked what picture we should use to illustrate this blog post, there was only one choice.

But is it appropriate?

Still, the moral issue remains: did the Thomas Jefferson High School for Science and Technology do anything wrong when it asked students to Google the Lena image and use it to test their algorithms? Potentially.

Given the ease with which a simple Google image search could yield nudity, perhaps in future the school should simply link directly to the image in the SIPI image database. This would shield students from accidentally accessing something they shouldn’t and would also provide them with several images to test their algorithms on. Something I am sure even Maddie would appreciate.

Should the field in general stop using the Lena image? My personal view is: no. The use of the Lena test image is a quirk of the industry that should be celebrated. That being said, it should be used alongside others equally. Blue Steel anyone?


Warning: searching the internet for “Lena Söderberg” or the “Lenna image” may yield results that are not safe for work.

The Conversation

Richard Matthews, Research Assistant in Digital Forensics, University of Adelaide

This article was originally published on The Conversation. Read the original article.