What is Transfer Learning?

Machine learning algorithms are generally divided into three categories: supervised learning, unsupervised learning, and reinforcement learning. Supervised learning studies the mapping function between input and output: the possible outcomes are known in advance, and the training data is labeled with the correct answers. Unsupervised learning, by contrast, uses an unlabeled training set to model the structure of the data, which makes evaluation more subjective.

Reinforcement learning is a method driven by feedback from the environment, with iterative and adaptive learning techniques that are believed to resemble human learning.
Deep learning, meanwhile, is a branch of machine learning based on Artificial Neural Networks (ANNs). A deep learning model is a stack of many layers, each applying a non-linear transformation that produces progressively higher-level abstractions of the data; modern networks can reach hundreds of layers. These layers fall into three types: the input layer, the hidden layers, and the output layer.
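The three layer types can be illustrated with a tiny forward pass. This is a framework-free sketch with hypothetical sizes (4 inputs, 8 hidden units, 3 output classes), not code from any particular study:

```python
import numpy as np

rng = np.random.default_rng(0)

x = rng.normal(size=(1, 4))              # input layer: 4 feature values
W1 = rng.normal(size=(4, 8))             # weights into the hidden layer
h = np.maximum(0, x @ W1)                # hidden layer: ReLU non-linear transformation
W2 = rng.normal(size=(8, 3))             # weights into the output layer
logits = h @ W2
probs = np.exp(logits) / np.exp(logits).sum()  # output layer: softmax over 3 classes

print(probs.shape)  # (1, 3)
```

Each hidden layer repeats the "weighted sum plus non-linearity" step; stacking many of them is what makes the network deep.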

Several previous studies on object classification using the Convolutional Neural Network (CNN) algorithm have reported high accuracy, so CNNs have proven to be very effective at classifying image objects. In addition, the MobileNetV2 architecture can further improve accuracy while reducing the network's computational complexity and long training times (Sunpark, 2019).

Neural networks have a hierarchical structure: general features are learned in the early layers, and increasingly task-specific features emerge as the network deepens. Traditional machine learning methods require that the source and target domains share the same data distribution. Transfer learning takes advantage of the similarities between the two domains, using a model trained on the source domain as a starting point to accelerate learning in the target domain. This reduces model training costs and significantly improves results, so transfer learning can enable machine learning in new application scenarios where there is not enough labeled data.

Read also: Introduction to Gamification and Video-Based Learning

Transfer learning is commonly applied in one of several ways after loading the pre-trained weights: retraining all parameters, retraining only the last few layers, adding a new fully connected layer on top of the original network, or keeping the network frozen and training only the final fully connected layer.
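The last strategy, freezing the pre-trained weights and training only a new fully connected layer, can be sketched without any deep learning framework. Here the "pre-trained" base is a random frozen projection and the labels are a toy rule; all names and sizes are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(42)

W_pretrained = rng.normal(size=(4, 8))        # "loaded" base weights: kept frozen
X = rng.normal(size=(32, 4))                  # target-domain inputs
y = (X.sum(axis=1) > 0).astype(float).reshape(-1, 1)  # toy binary labels

features = np.maximum(0, X @ W_pretrained)    # frozen base extracts features

W_head = np.zeros((8, 1))                     # new final fully connected layer
for _ in range(200):                          # only the head is updated
    p = 1 / (1 + np.exp(-(features @ W_head)))   # sigmoid output
    grad = features.T @ (p - y) / len(y)         # logistic-loss gradient
    W_head -= 0.5 * grad                         # gradient step; base untouched

preds = (1 / (1 + np.exp(-(features @ W_head)))) > 0.5
acc = (preds == y).mean()
print(acc)
```

Because gradients are only computed for `W_head`, the expensive base network never changes, which is exactly why this strategy trains quickly on small target datasets.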

Read also: Image Classification Using Transfer Learning and CNN

The preprocessing stage comes before training and testing. At this stage, the dataset is standardized so that all images are uniform; the Keras Python utility ImageDataGenerator can help with this. Preprocessing typically includes splitting the dataset, resizing images, and data augmentation. The dataset split separates the data used to build the model from the data used to test it. Resizing image pixels is an important preprocessing step in computer vision because smaller images train faster, although very small sizes can discard useful detail. Augmentation is the process of changing or modifying an image, for example by flipping, cropping, zooming, or rescaling; it multiplies the data so the model can generalize well.
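The three preprocessing steps can be made explicit with plain NumPy. This is an illustrative sketch on synthetic data with hypothetical sizes, not the ImageDataGenerator API itself:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic dataset: 100 RGB images of 96x96 pixels
images = rng.integers(0, 256, size=(100, 96, 96, 3), dtype=np.uint8)

# 1. Dataset split: 80% for training, 20% for testing
split = int(0.8 * len(images))
train, test = images[:split], images[split:]

# 2. Resize: nearest-neighbour downsample from 96x96 to 32x32
idx = np.arange(32) * 96 // 32
resized = train[:, idx][:, :, idx]

# 3. Augmentation and rescale: horizontal flip doubles the data,
#    then pixel values are scaled to [0, 1]
flipped = resized[:, :, ::-1]
augmented = np.concatenate([resized, flipped]) / 255.0

print(augmented.shape)  # (160, 32, 32, 3)
```

In practice a library handles these steps, but the effect is the same: a uniform image size, a held-out test set, and an enlarged, rescaled training set.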
