
Deep Learning in Practice (1): Flower Classification | Dataset and Source Code Included

Before we start:

Goal: build an AlexNet neural network, train it on the training data, and use the resulting model to classify the category of a flower image.

Python version: Python 3

IDE: VSCode

OS: macOS

The dataset and source code are linked at the end of the article; feel free to grab them!


Contents

Before we start:

Introduction

Dataset

Training code (with comments)

Training set size

Training iterations

Training results: accuracy

Training results: loss

Test set

Prediction code

Prediction results

Conclusion

Introduction

This article is for learning and practice only and does not involve any commercial use. If you spot any errors or shortcomings, please point them out.

Dataset

The dataset contains five flower categories in total, but this experiment uses only two of them, rose and sunflower, for the classification test.

The five categories:


Rose:


Sunflower: 

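As a quick sanity check before training, it helps to confirm how many images each class folder actually contains. Below is a minimal sketch of such a check; the `count_images_per_class` helper and the throwaway demo directory are illustrations, not part of the original project (point it at your own dataset root, e.g. the `Training` folder, to use it for real).

```python
import os, glob, tempfile

def count_images_per_class(root):
    """Return {class_name: number of .jpg files} for each sub-folder of root."""
    counts = {}
    for name in sorted(os.listdir(root)):
        class_dir = os.path.join(root, name)
        if os.path.isdir(class_dir):
            counts[name] = len(glob.glob(os.path.join(class_dir, '*.jpg')))
    return counts

# Demo on a throwaway directory structure (stand-in for the real dataset root)
root = tempfile.mkdtemp()
for cls, n in [('rose', 3), ('sunflower', 2)]:
    os.makedirs(os.path.join(root, cls))
    for i in range(n):
        open(os.path.join(root, cls, f'{i}.jpg'), 'w').close()

print(count_images_per_class(root))  # {'rose': 3, 'sunflower': 2}
```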

Training code (with comments)

```python
import os, glob

import matplotlib.pyplot as plt
import tensorflow as tf
from tensorflow.keras import layers

# Hyperparameters
resize = 224        # AlexNet expects 224x224 input
epochs = 8
batch_size = 5

# Gather the image paths of the two classes: rose = 0, sunflower = 1
train_data_path = '/Users/liqun/Desktop/KS/MyPython/DataSet/flowers/Training'
rose_path = os.path.join(train_data_path, 'rose')
sunflower_path = os.path.join(train_data_path, 'sunflower')

fpath_rose = [os.path.abspath(fp) for fp in glob.glob(os.path.join(rose_path, '*.jpg'))]
fpath_sunflower = [os.path.abspath(fp) for fp in glob.glob(os.path.join(sunflower_path, '*.jpg'))]

num_rose = len(fpath_rose)
num_sunflower = len(fpath_sunflower)
label_rose = [0] * num_rose
label_sunflower = [1] * num_sunflower

print('rose: ', num_rose)
print('sunflower: ', num_sunflower)

# Hold out 10% of each class for validation
RATIO_TEST = 0.1
num_rose_test = int(num_rose * RATIO_TEST)
num_sunflower_test = int(num_sunflower * RATIO_TEST)

fpath_train = fpath_rose[num_rose_test:] + fpath_sunflower[num_sunflower_test:]
label_train = label_rose[num_rose_test:] + label_sunflower[num_sunflower_test:]
fpath_vali = fpath_rose[:num_rose_test] + fpath_sunflower[:num_sunflower_test]
label_vali = label_rose[:num_rose_test] + label_sunflower[:num_sunflower_test]

num_train = len(fpath_train)
num_vali = len(fpath_vali)
print('num_train: ', num_train)
print('num_vali: ', num_vali)

def preproc(fpath, label):
    # Read and decode the image, pad-resize it to 224x224,
    # scale pixels to [0, 1], and one-hot encode the label
    image_byte = tf.io.read_file(fpath)
    image = tf.io.decode_image(image_byte, channels=3, expand_animations=False)
    image_resize = tf.image.resize_with_pad(image, resize, resize)
    image_norm = tf.cast(image_resize, tf.float32) / 255.
    label_onehot = tf.one_hot(label, 2)
    return image_norm, label_onehot

# Build the tf.data input pipelines: shuffle, repeat, preprocess, batch, prefetch
dataset_train = tf.data.Dataset.from_tensor_slices((fpath_train, label_train))
dataset_train = dataset_train.shuffle(num_train).repeat()
dataset_train = dataset_train.map(preproc, num_parallel_calls=tf.data.experimental.AUTOTUNE)
dataset_train = dataset_train.batch(batch_size).prefetch(tf.data.experimental.AUTOTUNE)

dataset_vali = tf.data.Dataset.from_tensor_slices((fpath_vali, label_vali))
dataset_vali = dataset_vali.shuffle(num_vali).repeat()
dataset_vali = dataset_vali.map(preproc, num_parallel_calls=tf.data.experimental.AUTOTUNE)
dataset_vali = dataset_vali.batch(batch_size).prefetch(tf.data.experimental.AUTOTUNE)

# AlexNet: five convolutional layers (with batch normalization and
# max pooling) followed by three fully connected layers
model = tf.keras.Sequential(name='AlexNet')
model.add(layers.Conv2D(filters=96, kernel_size=(11, 11),
                        strides=(4, 4), padding='valid',
                        input_shape=(resize, resize, 3),
                        activation='relu'))
model.add(layers.BatchNormalization())
model.add(layers.MaxPooling2D(pool_size=(3, 3), strides=(2, 2), padding='valid'))
model.add(layers.Conv2D(filters=256, kernel_size=(5, 5),
                        strides=(1, 1), padding='same', activation='relu'))
model.add(layers.BatchNormalization())
model.add(layers.MaxPooling2D(pool_size=(3, 3), strides=(2, 2), padding='valid'))
model.add(layers.Conv2D(filters=384, kernel_size=(3, 3),
                        strides=(1, 1), padding='same', activation='relu'))
model.add(layers.Conv2D(filters=384, kernel_size=(3, 3),
                        strides=(1, 1), padding='same', activation='relu'))
model.add(layers.Conv2D(filters=256, kernel_size=(3, 3),
                        strides=(1, 1), padding='same', activation='relu'))
model.add(layers.MaxPooling2D(pool_size=(3, 3), strides=(2, 2), padding='valid'))
model.add(layers.Flatten())
model.add(layers.Dense(4096, activation='relu'))
model.add(layers.Dropout(0.5))
model.add(layers.Dense(4096, activation='relu'))
model.add(layers.Dropout(0.5))
model.add(layers.Dense(1000, activation='relu'))
model.add(layers.Dropout(0.5))
model.add(layers.Dense(2, activation='softmax'))  # two classes: rose, sunflower

model.compile(loss='categorical_crossentropy',
              optimizer='sgd',
              metrics=['accuracy'])

history = model.fit(dataset_train,
                    steps_per_epoch=num_train // batch_size,
                    epochs=epochs,
                    validation_data=dataset_vali,
                    validation_steps=num_vali // batch_size,
                    verbose=1)

# Evaluate on both splits and save the trained model
scores_train = model.evaluate(dataset_train, steps=num_train // batch_size, verbose=1)
print(scores_train)
scores_vali = model.evaluate(dataset_vali, steps=num_vali // batch_size, verbose=1)
print(scores_vali)
model.save('/Users/liqun/Desktop/KS/MyPython/project/flowerModel.h5')

# history.history is a dict; its keys depend on the metrics configured in
# compile(), and each value is a list with one entry per epoch.
history_dict = history.history
train_loss = history_dict['loss']
train_accuracy = history_dict['accuracy']
val_loss = history_dict['val_loss']
val_accuracy = history_dict['val_accuracy']

# Plot the loss curves
plt.figure()
plt.plot(range(epochs), train_loss, label='train_loss')
plt.plot(range(epochs), val_loss, label='val_loss')
plt.legend()
plt.xlabel('epochs')
plt.ylabel('loss')

# Plot the accuracy curves
plt.figure()
plt.plot(range(epochs), train_accuracy, label='train_accuracy')
plt.plot(range(epochs), val_accuracy, label='val_accuracy')
plt.legend()
plt.xlabel('epochs')
plt.ylabel('accuracy')
plt.show()

print('Train has finished')
```

Training set size

Training iterations

Training results: accuracy

Training results: loss

Test set

Prediction code

```python
import cv2
from tensorflow.keras.models import load_model

resize = 224
label = ('rose', 'sunflower')

# Load the test image. OpenCV reads images in BGR order, but the model was
# trained on RGB input, so convert the channel order before predicting.
image = cv2.imread('/Users/liqun/Desktop/KS/MyPython/DataSet/flowers/Training/sunflower/23286304156_3635f7de05.jpg')
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
image = cv2.resize(image, (resize, resize))
image = image.astype('float32') / 255.0          # same [0, 1] scaling as training
image = image.reshape((1, resize, resize, 3))    # add the batch dimension

# Load the trained model and classify the image
model = load_model('/Users/liqun/Desktop/KS/MyPython/project/flowerModel.h5')
predict = model.predict(image)
i = predict.argmax(axis=1)[0]

print('----------------------')
print('Predict result')
print(label[i], ':', predict[0][i] * 100, '%')
```
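The last three lines turn the softmax output into a class name and a confidence percentage. That mapping can be isolated into a small reusable helper; `decode_prediction` is a hypothetical name for illustration, not part of the original code:

```python
def decode_prediction(probs, class_names):
    """Map a softmax probability vector to (class_name, confidence_percent)."""
    i = max(range(len(probs)), key=lambda k: probs[k])  # index of the highest probability
    return class_names[i], probs[i] * 100

name, conf = decode_prediction([0.25, 0.75], ('rose', 'sunflower'))
print(name, ':', conf, '%')  # sunflower : 75.0 %
```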

Prediction results

Conclusion

The model has now been trained and tested. If you would like to try it yourself, the dataset, test set, and source code are available at the link below.

Link: https://pan.baidu.com/s/1OJfwcF1PvX9qkZwT7MXd_Q?pwd=i0bt  Extraction code: i0bt

If this article helped you, please give it a like before you go!


URL: Deep Learning in Practice (1): Flower Classification | Dataset and Source Code Included https://m.huajiangbk.com/newsview110482.html
