The "eyes" made available in modern technological sciences shatter any idea of passive vision; these prosthetic devices show us that all eyes, including our own organic ones, are active perceptual systems, building on translations and specific ways of seeing, that is, ways of life. There is no unmediated photograph or passive camera obscura in scientific accounts of bodies and machines; there are only highly specific visual possibilities, each with a wonderfully detailed, active, partial way of organizing worlds.

All these pictures of the world should not be allegories of infinite mobility and interchangeability but of elaborate specificity and difference and the loving care people might take to learn how to see faithfully from another's point of view, even when the other is our own machine.

- Donna Haraway, Situated Knowledges 

Show and Tell: A Neural Image Caption Generator

Image caption generation models combine recent advances in computer vision and machine translation to produce realistic image captions with neural networks. Such models are trained to maximize the likelihood of a caption given an input image, and can then be used to generate novel descriptions of unseen images.

I adapted Google’s Show and Tell model and trained it on the Flickr30k dataset, using the TensorFlow framework to construct, train, and test the model. The test photographs were chosen to show how historical context shapes the interpretation of a scene. There is a presumption that the “information” is simply there in the image, and that it is only a matter of finding the right model to decode it. But more training on labeled images won’t prepare the system for something like a dead Syrian boy in the Aegean.
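
As a rough illustration of that training objective, here is a minimal encoder-decoder sketch written in tf.keras rather than the repo's original TensorFlow code: a frozen, pretrained CNN embeds the image, the embedding is fed to an LSTM as a first "token", and the network learns to predict each next word of the caption. VOCAB_SIZE, EMBED_DIM, and MAX_LEN are illustrative values of my own, not the repo's.

```python
# Minimal Show-and-Tell-style captioner sketched in tf.keras.
import tensorflow as tf

VOCAB_SIZE = 10000  # assumed vocabulary built from the Flickr30k captions
EMBED_DIM = 512
MAX_LEN = 20        # assumed maximum caption length

# Encoder: a frozen, pretrained CNN; its pooled features are projected
# into the word-embedding space and fed to the decoder as a first "token".
cnn = tf.keras.applications.InceptionV3(include_top=False, pooling="avg")
cnn.trainable = False

image_in = tf.keras.Input(shape=(299, 299, 3))
img_embed = tf.keras.layers.Dense(EMBED_DIM)(cnn(image_in))
img_embed = tf.keras.layers.Reshape((1, EMBED_DIM))(img_embed)

# Decoder: an LSTM trained to predict each next word of the caption,
# i.e. to maximize the likelihood of the caption given the image.
words_in = tf.keras.Input(shape=(MAX_LEN,), dtype="int32")
word_embed = tf.keras.layers.Embedding(VOCAB_SIZE, EMBED_DIM)(words_in)
sequence = tf.keras.layers.Concatenate(axis=1)([img_embed, word_embed])
hidden = tf.keras.layers.LSTM(EMBED_DIM, return_sequences=True)(sequence)
logits = tf.keras.layers.Dense(VOCAB_SIZE)(hidden)

model = tf.keras.Model([image_in, words_in], logits)
# Training targets are the input caption shifted one step to the left.
model.compile(
    optimizer="adam",
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True))
```

At inference time the decoder runs step by step, feeding each predicted word back in (the original paper uses beam search) until an end-of-sentence token appears.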

Source code: https://github.com/mlberkeley/oreilly-captions/tree/master/models 

NSFW Image Classification Neural Network by Yahoo

Detecting offensive or adult images is an important problem that researchers have tackled for decades. With the evolution of computer vision and deep learning, the algorithms have matured, and we are now able to classify an image as not suitable for work with greater precision. "As far as we can tell, this is the only Machine Learning solution to detecting not suitable for work (NSFW) content. However, since NSFW content is highly subjective, we recommend testing the algorithm on your own images."

The network takes in an image and outputs a probability (a score between 0 and 1) that can be used to filter not-suitable-for-work images. Scores below 0.2 indicate the image is very likely safe; scores above 0.8 indicate it is very likely NSFW; scores in the middle range can be binned into different NSFW levels. The researchers did not release their datasets, given the nature of the content. Is profanity measured by our Western standards?
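
Those thresholds translate directly into code. Below is a small helper of my own for consuming the model's output; the label names are assumptions, not part of the repo, and the score itself would come from running the open_nsfw Caffe model (the repo ships a classify_nsfw.py script for that).

```python
# Map the network's 0-1 NSFW probability onto coarse labels, following
# the thresholds described above. Label names here are my own.
def bin_nsfw_score(score: float) -> str:
    if not 0.0 <= score <= 1.0:
        raise ValueError("score must lie in [0, 1]")
    if score < 0.2:
        return "likely safe"
    if score > 0.8:
        return "likely NSFW"
    return "uncertain"  # middle range: bin further or route to human review

print(bin_nsfw_score(0.05))  # likely safe
print(bin_nsfw_score(0.93))  # likely NSFW
```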

Source code: https://github.com/yahoo/open_nsfw

Gender classification

Gender classification using the fer2013 and IMDB datasets with a Keras CNN model and OpenCV. The authors report a classification test accuracy of 96%. But who decides how gender is perceived, analysed, and codified?
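
A minimal sketch of that OpenCV + Keras pipeline follows: detect a face with OpenCV's bundled Haar cascade, crop it, and pass it through the CNN. The file paths, the 64x64 grayscale input size, and the label order are assumptions; the linked repo's actual preprocessing differs in detail.

```python
# Hypothetical face-detection + gender-classification pipeline sketch.
import cv2
import numpy as np
from tensorflow.keras.models import load_model

GENDER_LABELS = {0: "woman", 1: "man"}  # assumed label ordering

# Haar cascade face detector that ships with OpenCV.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
model = load_model("gender_model.hdf5")  # hypothetical path to the CNN

image = cv2.imread("photo.jpg")          # hypothetical input image
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

for (x, y, w, h) in detector.detectMultiScale(gray, 1.3, 5):
    face = cv2.resize(gray[y:y + h, x:x + w], (64, 64))
    face = face.astype("float32") / 255.0   # scale pixels to [0, 1]
    face = face.reshape(1, 64, 64, 1)       # batch of one grayscale face
    probs = model.predict(face)[0]
    print(GENDER_LABELS[int(np.argmax(probs))], probs)
```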

Source code: https://github.com/oarriaga/face_classification

Selected Works

proletariat.ai · Machine Intelligence Research

Coded Gaze · Machine Intelligence Research, Visual Design

Filisia · Social Innovation, Entrepreneurship, Education

Feminist Alexa · Machine Intelligence Research, Performance

Social Drones · Interaction Design, Robotics

Augmented Nature · Robotics, Ecology, Speculative Design

Verdelise · Social Innovation, Circular Economy Design

Cognitive Exotica · Artificial Intelligence Research, Fiction

Floating Lab · Design Engineering Workshop, Ecology