AI-Powered Virtual Makeup Try-On
1. Introduction
The AI-Powered Virtual Makeup Try-On system lets users apply makeup virtually and augment facial features in real time. The project combines computer vision and augmented reality to create an interactive application for the beauty industry, so users can visualize different makeup styles and colors before applying them physically.
2. Prerequisites
• Python: Install Python 3.x from the official Python website.
• Required Libraries:
  - opencv-python: Install using `pip install opencv-python`
  - dlib: Install using `pip install dlib`
  - numpy: Install using `pip install numpy`
  - mediapipe: Install using `pip install mediapipe`
• Pre-trained facial landmark detection models or Mediapipe's Face Mesh module.
• Webcam or camera-enabled device for real-time application.
3. Project Setup
1. Create a Project Directory:
- Name your project folder, e.g., `Virtual_Makeup_TryOn`.
- Inside this folder, create the Python script file (`virtual_makeup.py`).
2. Install Required Libraries:
Ensure OpenCV, dlib, and Mediapipe are installed using `pip`.
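The libraries above can be installed in one command. Note that `dlib` is listed among the prerequisites, although the example script in the next section uses only OpenCV, NumPy, and Mediapipe:

```shell
pip install opencv-python dlib numpy mediapipe
```

On some systems, building `dlib` requires CMake and a C++ compiler; if the install fails, the script below still runs without it.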
4. Writing the Code
Below is an example code snippet for the Virtual Makeup Try-On system:
import cv2
import mediapipe as mp
import numpy as np

# Initialize Mediapipe Face Mesh
mp_face_mesh = mp.solutions.face_mesh
face_mesh = mp_face_mesh.FaceMesh(static_image_mode=False, max_num_faces=1,
                                  refine_landmarks=True)

# Function to apply lipstick color
def apply_lipstick(frame, landmarks, color):
    lips_upper = [61, 62, 63, 64, 65, 66, 67, 0, 37, 39, 40, 185, 184, 61]
    lips_lower = [0, 78, 95, 88, 178, 87, 14, 317, 402, 310, 311, 312, 13]
    points_upper = np.array([[int(landmarks[pt][0]), int(landmarks[pt][1])]
                             for pt in lips_upper], np.int32)
    points_lower = np.array([[int(landmarks[pt][0]), int(landmarks[pt][1])]
                             for pt in lips_lower], np.int32)
    cv2.fillPoly(frame, [points_upper], color)
    cv2.fillPoly(frame, [points_lower], color)

# Main function for real-time virtual makeup application
def main():
    cap = cv2.VideoCapture(0)
    while cap.isOpened():
        ret, frame = cap.read()
        if not ret:
            break
        frame = cv2.flip(frame, 1)
        rgb_frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        result = face_mesh.process(rgb_frame)
        if result.multi_face_landmarks:
            for face_landmarks in result.multi_face_landmarks:
                h, w, _ = frame.shape
                # Scale normalized landmarks to pixel coordinates
                landmarks = [[int(lm.x * w), int(lm.y * h)]
                             for lm in face_landmarks.landmark]
                apply_lipstick(frame, landmarks, (0, 0, 255))  # Red lipstick (BGR)
        cv2.imshow('Virtual Makeup Try-On', frame)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break
    cap.release()
    cv2.destroyAllWindows()

if __name__ == "__main__":
    main()
5. Key Components
• Facial Landmark Detection: Identifies key facial features using Mediapipe's Face Mesh module.
• Makeup Augmentation: Applies virtual makeup effects such as lipstick and blush to the detected landmarks.
• Real-Time Processing: Captures and processes the live video feed for an interactive experience.
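Filling the lip polygons with a solid color, as the script above does, can look flat. A common refinement is to alpha-blend the tint with the underlying skin so texture shows through. Below is a minimal NumPy sketch of that idea; the `blend_region` helper and its `alpha` default are illustrative, not part of the original script:

```python
import numpy as np

def blend_region(frame, mask, color, alpha=0.5):
    """Blend `color` into `frame` wherever `mask` is nonzero.

    frame: H x W x 3 uint8 image (BGR); mask: H x W uint8 (255 inside the lips);
    alpha: tint opacity (0 = invisible, 1 = solid fill).
    """
    overlay = np.empty_like(frame)
    overlay[:] = color
    # Weighted average of the tint and the original pixels
    blended = (alpha * overlay + (1 - alpha) * frame).astype(np.uint8)
    # Copy blended pixels only where the mask is set
    out = frame.copy()
    out[mask > 0] = blended[mask > 0]
    return out

# Example: tint the centre of a grey test image red at 50% opacity
frame = np.full((4, 4, 3), 100, dtype=np.uint8)
mask = np.zeros((4, 4), dtype=np.uint8)
mask[1:3, 1:3] = 255
tinted = blend_region(frame, mask, (0, 0, 255), alpha=0.5)
```

In the script above, the same effect could replace the direct `cv2.fillPoly` call by filling the polygons into a mask first and then blending.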
6. Testing
1. Ensure the Mediapipe library is correctly installed and configured.
2. Run the script: `python virtual_makeup.py`
3. Verify the application of makeup effects in the video feed.
7. Enhancements
• Additional Effects: Add eye shadow, blush, and foundation effects.
• User Interface: Create a GUI for users to select makeup styles and colors.
• Real-Time Recommendations: Suggest makeup styles based on facial features.
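A lightweight step toward the user-interface enhancement is to map keypresses to lipstick shades inside the existing `cv2.waitKey` loop, without a full GUI. The shade names and BGR values below are illustrative assumptions, not part of the original script:

```python
# Illustrative BGR shades; keys '1'-'3' would select a shade in the main loop
LIPSTICK_SHADES = {
    ord('1'): ('classic red', (0, 0, 255)),
    ord('2'): ('deep plum', (90, 30, 120)),
    ord('3'): ('coral', (110, 120, 255)),
}

def select_shade(key, current):
    """Return the (name, BGR) shade for `key`, keeping `current` if unmapped."""
    return LIPSTICK_SHADES.get(key, current)

# Example: pressing '2' switches from red to deep plum
shade = ('classic red', (0, 0, 255))
shade = select_shade(ord('2'), shade)
```

In `main`, the `cv2.waitKey(1)` return value would be passed to `select_shade`, and the chosen BGR tuple forwarded to `apply_lipstick` in place of the hard-coded `(0, 0, 255)`.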
8. Troubleshooting
• Detection Issues: Ensure proper lighting and clear video input.
• Model Errors: Verify the integrity of Mediapipe installation.
• Performance Lag: Optimize the code or reduce input resolution.
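For the performance-lag item, one common mitigation is to run landmark detection on a downscaled copy of the frame; because Face Mesh returns normalized (0-1) coordinates, the landmarks can still be drawn on the full-resolution frame. A sketch of the dimension calculation, with an assumed 640-pixel target width:

```python
def downscale_dims(width, height, max_width=640):
    """Return (w, h) scaled so that w <= max_width, preserving aspect ratio."""
    if width <= max_width:
        return width, height  # Already small enough; leave unchanged
    scale = max_width / width
    return max_width, int(round(height * scale))

# Example: a 1080p frame is reduced to 640x360 before detection
small_w, small_h = downscale_dims(1920, 1080)
```

The small frame would be passed to `face_mesh.process`, while `apply_lipstick` keeps using the original frame's width and height to convert the normalized landmarks.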
9. Conclusion
The AI-Powered Virtual Makeup Try-On project demonstrates how augmented reality can transform the beauty industry. This system provides a practical and interactive platform for users to experiment with different makeup styles.