A machine learning consulting project helped e-card platform JibJab overcome technical barriers and launch a new product line.
The Los Angeles-based company has traditionally offered customers a simple interface and cropping tool to upload photos, extract oval-shaped head images and place them in personalized digital content. That approach worked well enough, but JibJab's new offering, a physical coffee-table book, required higher-quality images.
Matt Sielecki, vice president of engineering at JibJab, began researching machine learning (ML) as a way to improve image quality. He experimented with pretrained models but ran into limitations. For example, models built as academic projects cannot be used commercially.
Around that time, a representative of Mission, a Los Angeles-based cloud services provider, contacted Sielecki about the company's AI services. Mission had helped the e-card company years earlier with an AWS migration and cost optimization project. That history and the timely outreach paved the way for a new initiative: JibJab hired Mission in 2021 to build an ML-based image cutout algorithm from scratch.
Mission built the first version of the algorithm and ML model in nine weeks, but the algorithm has since gone through numerous iterations, and testing and model refinement continue.
“We are rolling it out to some users to get feedback, and we are starting to see good results,” Sielecki said. “We think we still have room for improvement, but we will definitely offer a better product.”
Training and improvement
Mission’s task was to develop an ML computer vision algorithm that could identify faces in uploaded photos, correctly cut out a person’s entire face and hair, and ignore all background elements. The initial goal was 85% accuracy with the ML-based image cutout technique.
As a first step toward that goal, Mission used two annotation tools, LabelMe and Amazon SageMaker Ground Truth, to label images and create a data set for training the algorithm. Mission then applied data augmentation techniques, such as adding blur, lighting changes and rotation, to expand the labeled data set from 1,000 images to 17,000.
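The augmentation step described above can be sketched in a few lines. This is a minimal illustration of the idea (blur, lighting changes, rotation), not Mission's actual pipeline; the library choice and parameters are assumptions.

```python
import numpy as np

def augment(img: np.ndarray) -> list[np.ndarray]:
    """Return augmented variants of one labeled image (H x W x 3, uint8)."""
    variants = []
    # Lighting: scale pixel intensities up and down, clipped to the valid range.
    variants.append(np.clip(img.astype(np.int32) * 1.4, 0, 255).astype(np.uint8))
    variants.append(np.clip(img.astype(np.int32) * 0.6, 0, 255).astype(np.uint8))
    # Rotation: a 90-degree rotation stands in for arbitrary-angle rotation here.
    variants.append(np.rot90(img))
    # Blur: naive box blur, averaging each pixel with its right/down neighbors.
    blurred = (img[:-1, :-1].astype(np.int32) + img[1:, :-1]
               + img[:-1, 1:] + img[1:, 1:]) // 4
    variants.append(blurred.astype(np.uint8))
    return variants
```

Each source image yields four extra variants, which is how a labeled set can grow from 1,000 toward 17,000 examples without new annotation work.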
Next, Mission used Detectron2, a Facebook AI Research library running in SageMaker, to detect objects within images and perform instance segmentation. The latter goes beyond object detection, using a more granular, pixel-level approach that more accurately determines the shape of a specific object, in this case the face and hair.
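The distinction matters: detection yields a bounding box, while instance segmentation assigns each pixel to a specific object instance. A toy sketch of that per-instance pixel labeling, using a simple flood fill rather than anything resembling Detectron2's internals, looks like this:

```python
import numpy as np

def label_instances(mask: np.ndarray) -> np.ndarray:
    """Assign a distinct integer ID to each 4-connected foreground blob."""
    labels = np.zeros_like(mask, dtype=np.int32)
    next_id = 0
    h, w = mask.shape
    for y in range(h):
        for x in range(w):
            if mask[y, x] and not labels[y, x]:
                next_id += 1          # found a new, unlabeled instance
                stack = [(y, x)]
                while stack:          # flood-fill the whole blob with its ID
                    cy, cx = stack.pop()
                    if 0 <= cy < h and 0 <= cx < w and mask[cy, cx] and not labels[cy, cx]:
                        labels[cy, cx] = next_id
                        stack += [(cy + 1, cx), (cy - 1, cx), (cy, cx + 1), (cy, cx - 1)]
    return labels
```

Two separate foreground blobs come back with IDs 1 and 2, which is the essence of "instance" segmentation: not just which pixels are foreground, but which object each pixel belongs to.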
Improving the accuracy of this process involved training and retraining the ML model. After Mission trained a model on JibJab's data set, the company ran it against new image data. This process revealed edge cases, instances in which the algorithm fell short of fully identifying faces and hair. Mission then modified the training data set and used it to retrain the model.
This aspect of the initiative brought home the important role of data engineering in AI. “It changed my perspective on what an AI project really is,” Sielecki said. “It’s less a coding problem and more a data problem.”
Ryan Ries, who leads machine learning, data and analytics at Mission, noted the importance of a diverse training data set. The training process revealed unexpected problems, such as bright light washing out part of a person's face or long, flowing hair resisting a precise cutout.
Determining why the model failed on certain data was the path to improvement. Ries described the investigative process as asking, “Why is this an edge case, why isn't the algorithm doing the job, and how do I retrain with the data set?”
Current use and future plans
JibJab currently uses Mission's image cutout method in its Starring You Books product line. A customer who wants to create a personalized book uploads an image, which is stored in an AWS S3 bucket. AWS's computer vision service, Amazon Rekognition, identifies the face in the image and performs an image quality check, including whether the face is large enough to achieve a good result.
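Rekognition's face detection returns a relative bounding box (width and height as fractions of the frame) for each face, so a quality gate like the one described can be sketched as a simple check on that box. The helper name and the 10% area threshold here are assumptions for illustration, not JibJab's actual rule.

```python
def face_large_enough(bounding_box: dict, min_area_ratio: float = 0.10) -> bool:
    """bounding_box uses Rekognition-style relative Width/Height values (0..1).

    Returns True if the face covers at least min_area_ratio of the frame,
    i.e. the upload is likely large enough to produce a clean cutout.
    """
    return bounding_box["Width"] * bounding_box["Height"] >= min_area_ratio
```

In production the box would come from a Rekognition DetectFaces response; uploads that fail the check can be rejected before the more expensive segmentation step runs.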
If the image passes, the Detectron2 model performs instance segmentation to zero in on the face and hair in the image. The final steps are post-processing and placement of the image in the personalized book.
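One plausible form of that post-processing is turning the segmentation mask into a cutout with a transparent background before the head is composited onto a book page. This is a minimal array-based sketch; the production pipeline's actual steps are not public.

```python
import numpy as np

def apply_cutout(img: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """img: H x W x 3 uint8 photo; mask: H x W bool face-and-hair mask.

    Returns an RGBA image: opaque inside the mask, transparent elsewhere,
    so only the head survives when composited onto another background.
    """
    h, w, _ = img.shape
    rgba = np.zeros((h, w, 4), dtype=np.uint8)
    rgba[..., :3] = img
    rgba[..., 3] = np.where(mask, 255, 0)  # alpha channel driven by the mask
    return rgba
```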
Mission's image cutout algorithm is now 90% accurate. With ongoing improvement, Mission and its client have raised the target to 95%. As the model improves, JibJab will consider how the image cutout method could be applied to other product lines.
“Our main goal is to get the model to a place where it can produce a head cutout for all of our users and capture all the contours and unique shapes of their faces,” Sielecki said. “We have additional training and optimization to do to run the model at scale before it can be ready for everyone.”