Machine Learning using Convolutional Neural Networks

Machine Learning with Amazon SageMaker

Computers are generally programmed to do what the developer dictates and will only behave predictably under the specified scenarios.

In recent years, people have increasingly turned to computers to perform tasks that can’t be achieved with traditional programming and previously had to be done manually by humans. Machine Learning gives computers the ability to ‘learn’ from observations and act on information without being explicitly programmed.

TechConnect entered the recent Get 2 the Core challenge on Unearthed’s crowd-sourcing platform. This is TechConnect’s story, told as part of that crowd-sourcing approach; it does not imply or assert in any way that Newcrest Mining endorses Amazon Web Services or the work TechConnect performed in this challenge.

Business problem

Currently a team at Newcrest Mining manually crops photographs of drill core samples before the photos can be fed into a system that detects the material type. This is extremely time-consuming due to the large number of photos, which is why Newcrest Mining turned to crowd sourcing via Unearthed, a platform bringing data scientists, start-ups and the energy & natural resources industry together.

Being able to automatically identify bounding box co-ordinates of the samples within an image would save 80-90% of the time spent preparing the photos.

Input Image

Expected Output Image

 

Before we can begin implementing an object-detection process, we first need to address a variety of issues with the photographs themselves:

  • Not all photos are straight
  • Not all core trays are in a fixed position relative to the camera
  • Not all photos are taken perpendicular to the core trays, introducing perspective distortion
  • Not all photos are high-resolution

In addition to object detection, we need an image-classification process to classify each image into a group based on the factors above. The groups are defined as:

Group 0 – Core trays are positioned correctly in the images with no distortion. This is the ideal case
Group 1 – Core trays are misaligned in the image
Group 2 – Core trays have perspective distortion
Group 3 – Core trays are misaligned and have perspective distortion
Group 4 – The photo has a low aspect ratio
Group 5 – The photo has a low aspect ratio and is misaligned

CNN Image Detection with Amazon SageMaker

Solution

We tried to solve this problem using Machine Learning, specifically supervised learning. In supervised learning the system is provided with the input data and the desired classification/label for each data point. The system learns a model that reliably outputs the correct label for previously seen inputs and the most likely label for unseen inputs.

This differs from unsupervised learning, where the target label is unknown and the system must group the data or derive labels from the inherent properties of the data set itself.

The Supervised Machine Learning process works by:

  1. Obtaining, preparing & labelling the input data
  2. Creating a model
  3. Training the model
  4. Testing the model
  5. Deploying & using the model

There are many supervised learning algorithms suited to different learning tasks. The object detection and classification problem of identifying core samples in images is particularly well suited to convolutional neural networks. The model ‘learns’ by iteratively adjusting internal weights and biases so that the training inputs produce the specified outputs; with more training data, these weights and biases generally produce more accurate predictions.

Amazon SageMaker provides a hosted platform that enabled us to quickly build, train, test and deploy our model.

Newcrest Mining provided a large collection of their photographs which contain core samples. A large subset of the photos also contained the expected output, which we used to train our model.

The expected output is a set of four (X, Y) coordinates per core sample in the photograph. The coordinates represent the corners of the bounding box that surrounds the core sample. Multiple sets of coordinates are expected for photos that contain multiple core samples.
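To make that concrete, a single labelled photo might be represented something like the sketch below. This is an illustrative Python structure only, not the exact label format supplied with the challenge data, and the file name and coordinates are made up:

# Illustrative only: one labelled photograph with two core samples,
# each described by four (X, Y) corner points of its bounding box.
expected_output_example = {
    "image": "core_tray_0001.jpg",   # hypothetical file name
    "cores": [
        {"corners": [(102, 240), (980, 238), (982, 310), (104, 312)]},
        {"corners": [(101, 330), (979, 328), (981, 402), (103, 404)]},
    ],
}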

The Process

We uploaded the supplied data to an AWS S3 bucket, using separate prefixes to distinguish images that came with the expected output from those that did not. S3 is an ideal store for the raw images, offering high durability, effectively unlimited capacity and direct integration with many other AWS products.

We further randomly split the photos with the expected output into a training dataset (70%) and a testing dataset (30%).
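A minimal sketch of that split, assuming the labelled photos are referenced by their S3 keys (the key names below are placeholders):

import random

labelled_keys = ["labelled/core_0001.jpg", "labelled/core_0002.jpg"]  # placeholder keys

random.seed(42)                      # make the split reproducible
random.shuffle(labelled_keys)

cut = int(len(labelled_keys) * 0.7)  # 70% for training
train_keys, test_keys = labelled_keys[:cut], labelled_keys[cut:]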

We created a Jupyter notebook on an Amazon SageMaker notebook instance to host and execute our code. By default the Jupyter notebook instance provides access to a wide variety of common data science tools such as numpy, tensorflow and matplotlib in addition to the Amazon SageMaker and AWS python SDKs. This allowed us to immediately focus on our particular problem of creating SageMaker compatible datasets with which we could build and test our models.

We trained our model by feeding the training dataset, along with the expected output, into SageMaker’s existing built-in object detection algorithm to fine-tune it to our specific problem. SageMaker exposes a collection of hyperparameters which influence how the model ‘learns’; adjusting their values affects the overall accuracy of the model and how long training takes. As training proceeded we were able to monitor the changes to the primary accuracy metric and pre-emptively cancel any training configurations that did not perform well, saving considerable time and money by aborting poor configurations early.
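As a rough illustration, launching such a training job with the SageMaker Python SDK might look like the sketch below (SDK v2 naming; the bucket paths, instance type, class count and hyperparameter values are placeholders, not the configuration we actually used):

import sagemaker
from sagemaker import image_uris
from sagemaker.estimator import Estimator

session = sagemaker.Session()
role = sagemaker.get_execution_role()   # available inside a SageMaker notebook

# Container image for the built-in object detection algorithm in this region
container = image_uris.retrieve("object-detection", session.boto_region_name)

od_model = Estimator(
    image_uri=container,
    role=role,
    instance_count=1,
    instance_type="ml.p3.2xlarge",          # placeholder GPU instance
    output_path="s3://your-bucket/output",  # placeholder output location
    sagemaker_session=session,
)

od_model.set_hyperparameters(
    base_network="resnet-50",
    use_pretrained_model=1,      # fine-tune a pre-trained network
    num_classes=1,               # a single 'core sample' class
    mini_batch_size=16,
    epochs=30,
    learning_rate=0.001,
    num_training_samples=1000,   # placeholder count
)

od_model.fit({
    "train": "s3://your-bucket/train",            # placeholder channels
    "validation": "s3://your-bucket/validation",
})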

We then tested the accuracy of our model by feeding it the testing data – data it had never seen – without the expected output, then comparing the model’s output to the expected output.
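A common way to score a predicted box against an expected one is intersection-over-union (IoU). The sketch below assumes axis-aligned boxes given as (xmin, ymin, xmax, ymax), which is a simplification of the four-corner labels described earlier:

def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (xmin, ymin, xmax, ymax)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union else 0.0

print(iou((0, 0, 10, 10), (5, 5, 15, 15)))   # 0.1428... (25 / 175)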

After the first round of training we had our benchmark for accuracy. From there we tuned the model by iteratively adjusting the hyperparameters and model parameters, augmenting the data set with additional examples, then retraining and retesting. Setting the hyperparameter values is more of an art form than a science – trial and error is often the best way.

We used a technique which dynamically assigned values to the learning rate after each epoch, similar to a harmonic progression:

Harmonic Progression
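In code, a schedule of this kind can be as simple as the sketch below (illustrative values only, not the exact schedule we used):

def harmonic_learning_rate(base_lr, epoch):
    """Learning rate for a given (0-indexed) epoch, decaying like 1/(1+n)."""
    return base_lr / (1.0 + epoch)

base_lr = 0.001   # placeholder starting value
print([harmonic_learning_rate(base_lr, e) for e in range(5)])
# [0.001, 0.0005, 0.000333..., 0.00025, 0.0002] -- big steps early, ever-smaller reductions later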

This technique allowed us to start with large values so the model converged quickly at first, then reduce the learning rate by an increasingly smaller amount after each epoch as the model got closer to an optimal solution. After many iterations of tuning, training and testing we had improved the overall accuracy of the model compared with our benchmark, and with our project deadline fast approaching we decided it was as accurate as possible in the timeframe we had.

We then used our model to classify and detect the objects in the remaining photographs that didn’t exist in the training set.  The following images show the bounding boxes around the cores that our model predicted:

CNN Bounding
CNN Bounding
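Continuing the earlier training sketch, deploying the tuned estimator to an endpoint and requesting predictions for an unseen photo might look roughly like this. The instance type, file name and confidence threshold are assumptions; the response layout shown is that of SageMaker’s built-in object detection algorithm:

import json
from sagemaker.serializers import IdentitySerializer

predictor = od_model.deploy(
    initial_instance_count=1,
    instance_type="ml.m4.xlarge",            # placeholder inference instance
)
predictor.serializer = IdentitySerializer(content_type="image/jpeg")

with open("unseen_core_tray.jpg", "rb") as f:   # hypothetical photo
    result = json.loads(predictor.predict(f.read()))

# Each detection is [class_index, confidence, xmin, ymin, xmax, ymax],
# with coordinates normalised to the image dimensions.
boxes = [d for d in result["prediction"] if d[1] > 0.5]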

Lessons Learned

Before we began we had extremely high expectations of how accurate our model would be; in reality it fell short of those expectations.
We discussed things that could have made the model more accurate, train faster, or both, including:

  • Tuning the hyperparameters using SageMaker’s automated hyperparameter tuning tooling
  • Copying the data across multiple regions to gain better access to the specific machine types we required for training
  • Increasing the size of the training dataset by:
    • Requesting more photographs
    • Duplicating the provided photographs and modifying them slightly (see the augmentation sketch after this list). This included:
      • including duplicate copies of images and labels
      • including copies after converting the images to greyscale
      • including copies after changing the aspect ratio of the images
      • including copies after mirroring the images
  • Splitting the problem into separate, simpler machine learnable stages
  • Strategies for identifying the corners of the cores when they are not a rectangle in the image
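A minimal sketch of the augmentation ideas above, using Pillow. This is illustrative only, not a pipeline we ran, and the bounding-box labels would need to be transformed to match each augmented copy:

from PIL import Image, ImageOps

def augment(path):
    img = Image.open(path)
    greyscale = img.convert("L")                          # greyscale copy (labels unchanged)
    mirrored = ImageOps.mirror(img)                       # horizontal mirror (flip X coordinates)
    width, height = img.size
    squashed = img.resize((width, int(height * 0.75)))    # change aspect ratio (scale Y coordinates)
    return [greyscale, mirrored, squashed]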

During these discussions we realised we hadn’t defined a cut-off for when we would consider our model to be ‘accurate enough’.

As a general rule, the accuracy of the models you build improves most rapidly in the first few iterations; after that the rate of improvement slows significantly. Each subsequent improvement requires lengthier training, more sophisticated algorithms and models, more sophisticated feature engineering, or a substantial change of approach entirely. This trend is depicted in the following chart:

Learning accuracy over time

Depending on the use case, a model with an accuracy of 90% often requires significantly less training time, engineering effort and sophistication than a model with an accuracy of 93%. The acceptance criteria for a model needs to carefully balance these considerations to maximise the overall return on investment for the project.

In our case time was the factor that dictated when we stopped training and started using the model to produce the outputs for unseen photographs.

 

Thank you to the team at TechConnect that volunteered to try Amazon SageMaker to address the Get 2 the Core challenge posted by Newcrest Mining on the Unearthed portal. Also a big thank you for sharing the lessons learned and putting this blog together!

Intensive Care Unit - Data Collection

Precision Medicine Data Platform

Recently TechConnect and IntelliHQ attended the eHealth Expo 2018. IntelliHQ are specialists in Machine Learning in the health space, and are the innovators behind the development of a cloud-based precision medicine data platform. TechConnect are IntelliHQ’s cloud technology partners, and our strong relationship with Amazon Web Services and the AWS life sciences team has enabled us to deliver the first steps towards building out the precision medicine data platform.

This video certainly sums up the goals of IntelliHQ and how TechConnect are partnering to deliver solutions in life sciences on the Amazon Web Services cloud platform.

Achieving this level of integration with the General Electric Carescape High Speed Data Interface is a first in Australia and potentially a first outside of America. TechConnect have designed a lightweight service that connects to the GE Carescape, pushes the high-fidelity data to Amazon Kinesis Data Firehose, and then persists it in cost-effective storage on Amazon S3.
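The push side of such a connector can be very small; a minimal sketch using boto3 is shown below. The delivery stream name and record layout are assumptions for illustration, not details of Panacea itself:

import json
import boto3

firehose = boto3.client("firehose")

def push_sample(sample):
    """Send one high-fidelity monitor sample to Kinesis Data Firehose,
    which buffers and delivers it to Amazon S3."""
    firehose.put_record(
        DeliveryStreamName="carescape-hsdi-stream",   # hypothetical stream name
        Record={"Data": (json.dumps(sample) + "\n").encode("utf-8")},
    )

push_sample({"bed": "ICU-04", "metric": "ECG_II", "value": 0.42, "ts": "2018-07-01T02:15:00.000Z"})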

With the raw data stored on Amazon S3, data lake principles can be applied to enrich and process data for research and ultimately help save more lives in a proactive way. The diagram below shows a high level architecture that supports the data collection and machine learning capability inside the precision medicine data platform.

 

GE Carescape HSDI to Cloud Connector

This software, named Panacea, will be made available as an open source project.

Be sure to explore the following two sources of further information:

Check out Dr Brent Richards’ presentation at the recent eHealth Expo 2018 as well as a selection of other speakers located here.

AIkademi seeks to develop the capabilities of individuals, organisations and communities to embrace the opportunities emerging from machine learning.

AWS SAM Project

Using AWS SAM for a CORS Enabled Serverless API

Over the past two years TechConnect has seen increasing demand for creating ‘serverless’ API backends, whether built from scratch or converted from existing services running on expensive virtual machines in AWS. This has been an iterative learning process for us and, I feel, for many others in the industry. However, it feels like each month pioneers in the field answer our cries for help by creating or extending open-source projects to make our ‘serverless’ lives a little easier.

There are quite a few options for creating serverless applications in AWS (Serverless Framework, Zappa, etc.). However, in this blog post we will discuss using AWS SAM (Serverless Application Model, previously known as Project Flourish) to create a CORS enabled API. All templates and source code mentioned can be found in this GitHub repository. I highly recommend having it open in another tab, along with the AWS SAM project.


API Design First with Swagger

Code or design first? One approach is not necessarily better than the other, but at TechConnect we’ve been focusing on a design-first mentality when it comes to building APIs for our clients. We aren’t the users of the APIs we build, and we aren’t the front-end developers who might build a website on top of them. Instead our goal when creating an external API is to create a logical and human-readable API contract specification. To achieve this we use Swagger, the Open API specification, to build and document our RESTful backends.

In the image below, we have started to design a simple movie ratings API in YAML using the Open API specification. In its current state, it is just an API contract showing the requests and responses. However, it will be further modified to become an AWS API Gateway compatible and AWS Lambda integrated document in future steps.

Code Structure

Our API is a simple CRUD service that will make use of Amazon DynamoDB to create, list and delete movie ratings for a given year. This could all easily reside in a single Python file, but instead we will split it up to make it a little more realistic for larger projects. As this is a small demo, we’ll be missing a few resources that would usually be included in a real project (tests, task runners, etc.), but have a look at The Hitchhiker’s Guide to Python for a nice Python structure for your own future APIs.


- template.yaml
- swagger.yaml
- requirements.txt
- movies
  - api
    - __init__.py
    - ratings.py
  - core
    - __init__.py
    - web.py
  - __init__.py

Our Python project movies contains two sub-packages: api and core. Our AWS Lambda handlers are located in api.ratings.py, where each handler will: process the request from API Gateway, interact with DynamoDB (using a table name set by an environment variable) and return a response object to API Gateway.

movies.api.ratings.py

...
from movies.core import web

def get_ratings(event, context):
    ...
    return web.cors_web_response(200, ratings_list)
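Filled out, one possible shape of that handler is sketched below. This is not the exact repository code; the table schema with year as the partition key and the Decimal conversion are assumptions:

import os
from decimal import Decimal

import boto3
from boto3.dynamodb.conditions import Key

from movies.core import web

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table(os.environ["RATINGS_TABLE"])   # table name injected by SAM

def _to_plain(item):
    # DynamoDB returns numbers as Decimal; convert them so json.dumps succeeds
    return {k: float(v) if isinstance(v, Decimal) else v for k, v in item.items()}

def get_ratings(event, context):
    year = int(event["pathParameters"]["year"])        # path parameter from API Gateway
    response = table.query(KeyConditionExpression=Key("year").eq(year))
    ratings_list = [_to_plain(item) for item in response.get("Items", [])]
    return web.cors_web_response(200, ratings_list)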

CORS in Lambda Responses

In the previous step you might have noticed we were using a function to build an integration response. The object body is serialised into a JSON string, and the headers Access-Control-Allow-Headers, Access-Control-Allow-Methods and Access-Control-Allow-Origin are set to enable Cross-Origin Resource Sharing (CORS).

movies.core.web.py

import json

def cors_web_response(status_code, body):
    return {
        'statusCode': status_code,
        'headers': {
            'Access-Control-Allow-Headers':
                'Content-Type,Authorization,X-Amz-Date,X-Api-Key,X-Amz-Security-Token',
            'Access-Control-Allow-Methods':
                'DELETE,GET,HEAD,OPTIONS,PATCH,POST,PUT',
            'Access-Control-Allow-Origin':
                '*'
        },
        'body': json.dumps(body)
    }

CORS in Swagger

Previously in our Lambda code, we built CORS headers into our responses. However, this is only half of the solution. Annoyingly, we must add an OPTIONS HTTP method at every path level of our API. This satisfies the preflight request the client makes to check whether CORS requests are enabled. Although it uses x-amazon-apigateway-integration, the response is mocked by API Gateway; AWS Lambda is not needed to implement it.

swagger.yaml

paths:
  /ratings/{year}:
    options:
      tags:
      - "CORS"
      consumes:
      - application/json
      produces:
      - application/json
      responses:
        200:
          description: 200 response
          schema:
            $ref: "#/definitions/Empty"
          headers:
            Access-Control-Allow-Origin:
              type: string
            Access-Control-Allow-Methods:
              type: string
            Access-Control-Allow-Headers:
              type: string
      x-amazon-apigateway-integration:
        responses:
          default:
            statusCode: 200
            responseParameters:
              method.response.header.Access-Control-Allow-Methods: "'DELETE,GET,HEAD,OPTIONS,PATCH,POST,PUT'"
              method.response.header.Access-Control-Allow-Headers: "'Content-Type,Authorization,X-Amz-Date,X-Api-Key,X-Amz-Security-Token'"
              method.response.header.Access-Control-Allow-Origin: "'*'"
        passthroughBehavior: when_no_match
        requestTemplates:
          application/json: "{\"statusCode\": 200}"
        type: mock

Integrating with SAM

Since AWS SAM is an extension of CloudFormation, the syntax is almost identical. The snippets below show the integration between template.yaml and swagger.yaml. The name of the AWS Lambda function GetRatings is passed into the API via a stage variable, and swagger.yaml integrates the Lambda proxy using x-amazon-apigateway-integration. One important thing to note is that a Swagger document is not required to create an API Gateway resource in AWS SAM; however, we are using one because of our design-first mentality and because it is required for the CORS preflight responses. The AWS SAM team are currently looking to reduce the need for this in CORS applications, so keep an eye out for the ongoing topic being discussed on GitHub.

template.yaml

AWSTemplateFormatVersion: '2010-09-09'
Transform: 'AWS::Serverless-2016-10-31'
Resources:
  ApiGatewayApi:
    Type: AWS::Serverless::Api
    Properties:
      DefinitionUri: swagger.yaml
      StageName: v1
      Variables:
        GetRatings: !Ref GetRatings
...
  GetRatings:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: ./build
      Handler: movies.api.ratings.get_ratings
      Role: !GetAtt CrudLambdaIAMRole.Arn
      Environment:
        Variables:
          RATINGS_TABLE: !Ref RatingsTable
      Events:
        GetRaidHandle:
          Type: Api
          Properties:
            RestApiId: !Ref ApiGatewayApi
            Path: /ratings/{year}
            Method: GET
...
swagger.yaml

paths:
  /ratings/{year}:
    get:
      ...
      x-amazon-apigateway-integration:
        responses:
          default:
            statusCode: 200
            responseParameters:
              method.response.header.Access-Control-Allow-Origin: "'*'"
        uri: arn:aws:apigateway:REGION:lambda:path/2015-03-31/functions/arn:aws:lambda:REGION:ACCOUNT_ID:function:${stageVariables.GetRatings}/invocations
        passthroughBehavior: when_no_match
        httpMethod: POST
        type: aws_proxy

Deploying SAM API

Now that all the resources are ready, the final step is to package and deploy the SAM application. You may have noticed in template.yaml that the source of the Lambda function was listed as ./build. Any AWS Lambda function that uses non-standard Python libraries requires them to be included in the deployment package. To demonstrate this, we’ll copy our code into a build folder and install the dependencies there.


$ mkdir ./build
$ cp -p -r ./movies ./build/movies
$ pip install -r requirements.txt -t ./build

Finally, you will need to package your SAM deployment to convert it into a traditional AWS CloudFormation template. First you will need to make sure your own account ID and desired region are substituted into swagger.yaml (using sed). You will also need to provide an existing S3 bucket to store the packaged code. If you inspect template-out.yaml you will notice that the source of each AWS Lambda function is now an object in S3; this is what is used by aws cloudformation deploy. One final tip: remember to include --capabilities CAPABILITY_IAM in your deploy if you are creating any roles during your deployment.


$ sed -i "s/account_placeholder/AWS_ACCOUNT_ID/g" 'swagger.yaml'
$ sed -i "s/region_placeholder/AWS_REGION/g" 'swagger.yaml'
$ aws cloudformation package --template-file ./template.yaml --output-template-file ./template-out.yaml --s3-bucket YOUR_S3_BUCKET_NAME
$ aws cloudformation deploy --template-file template-out.yaml --stack-name MoviesAPI --capabilities CAPABILITY_IAM

AWS Lambda Specialty - Australia Partners

AWS Service Delivery Program for AWS Lambda

30 November 2016 – TechConnect IT Solutions, Making Your Cloud Journey a Success, announced today that it has achieved AWS Service Delivery Partner status for AWS Lambda.

The AWS Service Delivery Program is designed to highlight AWS Partner Network (APN) Partners who have a track record of delivering verified customer success for specific Amazon Web Services (AWS) products.

The AWS Service Delivery Program was recently launched to help AWS customers find qualified APN Partners that provide expertise in a specific service or skill area. To qualify, partners must pass service-specific verification of customer references and a technical review, meaning customers can be confident they are working with partners that provide recent and relevant experience.

AWS Lambda Partners provide services and tools that help customers build or migrate their solutions to a micro-services based serverless architecture, without the need to worry about provisioning or managing servers.

“TechConnect, an Amazon Web Services Advanced Consulting Partner, is proud to participate in the AWS Service Delivery Program for AWS Lambda,” said Mike Cunningham, CEO. “Our dynamic team assists organisations to deliver applications in the cloud using elastic serverless architectures. Applications built with no servers mean a truly elastic and resilient architecture that grows with you.”

TechConnect build robust and secure serverless architectures with Amazon S3, Amazon CloudFront, Amazon Route 53, AWS Certificate Manager, Amazon API Gateway, AWS Lambda, Amazon RDS and/or Amazon DynamoDB.