Compare commits

...

42 Commits

Author | SHA1 | Message | Date

Mr.Tom | 322f986e63 | Merge 6d6d2fc748 into d6236a0eed | 2024-12-04 18:00:39 +07:00
KRSHH | d6236a0eed | Update README.md | 2024-11-30 23:37:38 +05:30
KRSHH | 6171141505 | Detection Benchmarks | 2024-11-17 23:53:04 +05:30
KRSHH | 08adb53b8f | Add files via upload | 2024-11-17 23:48:38 +05:30
Kenneth Estanislao | 9e5446582e | Merge branch 'main' of https://github.com/hacksider/Deep-Live-Cam | 2024-11-17 22:24:04 +08:00
Kenneth Estanislao | b9c7c0db6f | Update .gitignore | 2024-11-17 21:52:41 +08:00
Kenneth Estanislao | cab8b9afcb | Update README.md | 2024-11-14 19:47:35 +08:00
Kenneth Estanislao | 4d8ba6396a | Merge pull request #773 from NeuroDonu/main | 2024-11-12 13:21:34 +08:00
    fix for GfpGAN and inswapper model path retrieval bug
NeuroDonu | e4761e4d66 | fix path for download and use model | 2024-11-09 16:43:35 +03:00
NeuroDonu | a840986159 | fix path for model | 2024-11-09 16:43:13 +03:00
KRSHH | 4874282642 | Making issue template mandatory | 2024-11-08 23:21:30 +05:30
KRSHH | 71c33437fc | Update bug_report.md | 2024-11-02 12:59:33 +05:30
KRSHH | a39b2e8d81 | Update bug_report.md | 2024-11-01 10:31:44 +05:30
KRSHH | a7e775f918 | Removed Link of a disabled repo | 2024-10-30 18:05:42 +05:30
    For avoiding ToS violation strike on this
KRSHH | 5919995fa1 | Update bug_report.md | 2024-10-30 11:41:24 +05:30
    Added this because of too many amateurs not following the obvious common steps before opening an issue.
Kenneth Estanislao | 8746c9bd36 | Update metadata.py | 2024-10-30 00:25:06 +08:00
    1.7
KRSHH | 6a9ac5b70a | Merge pull request #743 from theogbob/patch-1 | 2024-10-27 10:33:53 +05:30
    Fix ui.py
theogbob | 916c2f82d8 | Fix ui.py | 2024-10-26 14:40:03 -04:00
    Add command to "mouth_mask": modules.globals.mouth_mask which fixes the error "SyntaxError: invalid syntax. Perhaps you forgot a comma?"
KRSHH | 80f6ea9e65 | Save Mouth Mask Switch states | 2024-10-26 17:54:45 +05:30
Kenneth Estanislao | 9e24281a94 | Delete media/mouth.gif | 2024-10-26 14:32:16 +08:00
Kenneth Estanislao | 82b527487a | Update README.md | 2024-10-26 14:31:24 +08:00
    ohhh... bad example during political times 😝
Kenneth Estanislao | abde84ea57 | Merge pull request #740 from KRSHH/main | 2024-10-26 14:12:20 +08:00
    BOUNTY: Mouth Mask Feature
KRSHH | c599bb3e34 | Mouth Masking Example | 2024-10-25 22:47:53 +05:30
KRSHH | 39db53abd6 | Update README.md | 2024-10-25 21:34:52 +05:30
    Describes better.
KRSHH | 29c9c119d3 | Add Mouth Mask Feature | 2024-10-25 20:59:30 +05:30
KRSHH | fad626e84c | Revert "Implement mouth mask" | 2024-10-25 20:55:21 +05:30
    This reverts commit 5ef255c3c3.
KRSHH | 5ef255c3c3 | Implement mouth mask | 2024-10-25 20:53:31 +05:30
KRSHH | 6f6f93a4ad | Added Links to Models in Instructions | 2024-10-22 18:16:10 +05:30
KRSHH | c75f941716 | Removed Package Repetition | 2024-10-22 17:24:06 +05:30
KRSHH | e4af521592 | Delete Media from main | 2024-10-21 19:02:59 +05:30
KRSHH | 6d40560c92 | Add files via upload | 2024-10-21 19:00:10 +05:30
KRSHH | 570648efd0 | Upload images to media folder | 2024-10-21 18:56:36 +05:30
KRSHH | 2dc429440e | Shift Images to a folder | 2024-10-21 18:50:07 +05:30
Kenneth Estanislao | 240995bbe4 | Update README.md | 2024-10-21 16:14:39 +08:00
KRSHH | fe8e54ddc1 | Update README.md - Fix Text position | 2024-10-20 22:37:30 +05:30
Kenneth Estanislao | 1462ee9aeb | Update README.md | 2024-10-20 22:46:38 +08:00
    included instructions to watch movies in realtime!
hoangngoc5275 | 6d6d2fc748 | Update python-package.yml | 2024-09-29 04:33:28 +07:00
    Signed-off-by: hoangngoc5275 <hoangngocngoc50@gmail.com>
hoangngoc5275 | cb9d04933b | Merge branch 'hacksider:main' into main | 2024-09-29 04:10:38 +07:00
looman loras | 2edde04570 | Merge branch 'main' of https://github.com/hoangngoc5275/Deepface | 2024-09-18 17:18:02 +07:00
    * 'main' of https://github.com/hoangngoc5275/Deepface:
      Create python-package.yml
hoangngoc5275 | 5da4632c15 | Merge branch 'hacksider:main' into main | 2024-09-17 11:16:08 +07:00
hoangngoc5275 | 0f65be8bb3 | Merge branch 'hacksider:main' into main | 2024-09-16 20:53:42 +07:00
hoangngoc5275 | d0faf924d1 | Create python-package.yml | 2024-09-14 15:03:47 +07:00
    Signed-off-by: hoangngoc5275 <hoangngocngoc50@gmail.com>
24 changed files with 602 additions and 95 deletions

View File: bug_report.md (issue template)

@@ -1,38 +1,26 @@
---
name: Bug report
about: Create a report to help us improve
title: ''
labels: ''
assignees: ''
---
***[Remove this] The issue will be closed without notice and treated as spam if the template is not followed.***
**Describe the bug**
A clear and concise description of what the bug is.
**To Reproduce**
Steps to reproduce the behavior:
1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error
**Expected behavior**
A clear and concise description of what you expected to happen.
**Screenshots**
If applicable, add screenshots to help explain your problem.
**Desktop (please complete the following information):**
- OS: [e.g. iOS]
- Browser [e.g. chrome, safari]
- Version [e.g. 22]
**Error Message**
**Smartphone (please complete the following information):**
- Device: [e.g. iPhone6]
- OS: [e.g. iOS8.1]
- Browser [e.g. stock browser, safari]
`<The error message in terminal>`
**Desktop (please complete the following information):**
- OS: [e.g. Windows]
- Version [e.g. 22]
- GPU
- CPU
**Additional context**
Add any other context about the problem here.
**Confirmation (Mandatory)**
- [ ] I have followed the template
- [ ] This is not a query about how to increase performance
- [ ] I have checked the issues page, and this is not a duplicate

View File: python-package.yml (GitHub Actions workflow)

@@ -0,0 +1,40 @@
# This workflow will install Python dependencies, run tests and lint with a variety of Python versions
# For more information see: https://docs.github.com/en/actions/automating-builds-and-tests/building-and-testing-python
name: Python package
on:
push:
branches: [ "main" ]
pull_request:
branches: [ "main" ]
jobs:
build:
runs-on: ubuntu-latest
strategy:
fail-fast: false
matrix:
python-version: ["3.9", "3.10", "3.11"]
steps:
- uses: actions/checkout@v4
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v3
with:
python-version: ${{ matrix.python-version }}
- name: Install dependencies
run: |
python -m pip install --upgrade pip
python -m pip install flake8 pytest
if [ -f requirements.txt ]; then pip install -r requirements.txt; fi
- name: Lint with flake8
run: |
# stop the build if there are Python syntax errors or undefined names
flake8 . --count --select=E9,F63,F7,F82 --show-source --statistics
# exit-zero treats all errors as warnings. The GitHub editor is 127 chars wide
flake8 . --count --exit-zero --max-complexity=10 --max-line-length=127 --statistics
- name: Test with pytest
run: |
pytest
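Note: pytest exits non-zero when it collects no tests, so this job will fail on a repository without a test suite. A minimal sketch of a test file the workflow would pick up; the file name tests/test_smoke.py is hypothetical, as is the assumption that modules.metadata imports cleanly:

```python
# tests/test_smoke.py - hypothetical example, not part of this diff.
import importlib


def test_metadata_importable():
    # pytest auto-collects test_*.py files; this only checks that the package imports.
    metadata = importlib.import_module("modules.metadata")
    assert metadata.version  # "1.7.0" per the metadata.py change below
```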

View File: .gitignore (vendored, 1 line changed)

@@ -24,3 +24,4 @@ models/GFPGANv1.4.pth
models/DMDNet.pth
faceswap/
.vscode/
switch_states.json

View File: README.md

@@ -1,14 +1,19 @@
<h1 align="center">Deep Live Cam</h1>
<h1 align="center">Deep-Live-Cam</h1>
<p align="center">
Real-time face swap and video deepfake with a single click and only a single image.
</p>
<p align="center">
<img src="demo.gif" alt="Demo GIF">
<img src="avgpcperformancedemo.gif" alt="Performance Demo GIF">
<a href="https://trendshift.io/repositories/11395" target="_blank"><img src="https://trendshift.io/api/badge/repositories/11395" alt="hacksider%2FDeep-Live-Cam | Trendshift" style="width: 250px; height: 55px;" width="250" height="55"/></a>
</p>
<p align="center">
<img src="media/demo.gif" alt="Demo GIF">
<img src="media/avgpcperformancedemo.gif" alt="Performance Demo GIF">
</p>
## Disclaimer
This software is intended as a productive contribution to the AI-generated media industry. It aims to assist artists with tasks like animating custom characters or using them as models for clothing, etc.
@@ -20,17 +25,17 @@ Users are expected to use this software responsibly and legally. If using a real
## Quick Start (Windows / Nvidia)
[![Download](https://github.com/user-attachments/assets/3e3e252a-4bfa-41fb-a88c-84557402a7c7)](https://hacksider.gumroad.com/l/vccdmm)
[![Download](media/download.png)](https://hacksider.gumroad.com/l/vccdmm)
[Download latest pre-built version with CUDA support](https://hacksider.gumroad.com/l/vccdmm) - No Manual Installation/Downloading required.
[Download latest pre-built version with CUDA support](https://hacksider.gumroad.com/l/vccdmm) - No Manual Installation/Downloading required, plus early access to new features.
## Installation (Manual)
**Please be aware that the installation needs technical skills and is NOT for beginners; consider downloading the prebuilt version. Please do NOT open platform- and installation-related issues on GitHub before discussing them on the Discord server.**
### Basic Installation (CPU)
**Please be aware that the installation needs technical skills and is not for beginners; consider downloading the prebuilt version.**
<details>
<summary>Click to see the process</summary>
### Installation
This is more likely to work on your computer but will be slower as it utilizes the CPU.
@@ -72,10 +77,7 @@ brew install python-tk@3.10
**Run:** If you don't have a GPU, you can run Deep-Live-Cam using `python run.py`. Note that initial execution will download models (~300MB).
### GPU Acceleration (Optional)
<details>
<summary>Click to see the details</summary>
### GPU Acceleration
**CUDA Execution Provider (Nvidia)**
@@ -159,38 +161,46 @@ python run.py --execution-provider openvino
- Use a screen capture tool like OBS to stream.
- To change the face, select a new source image.
![demo-gif](demo.gif)
## Features
### Resizable Preview Window
Dynamically improve performance using the `--live-resizable` parameter.
![resizable-gif](resizable.gif)
![resizable-gif](media/resizable.gif)
### Face Mapping
Track and change faces on the fly.
![face_mapping_source](face_mapping_source.gif)
![face_mapping_source](media/face_mapping_source.gif)
**Source Video:**
![face-mapping](face_mapping.png)
![face-mapping](media/face_mapping.png)
**Enable Face Mapping:**
![face-mapping2](face_mapping2.png)
![face-mapping2](media/face_mapping2.png)
**Map the Faces:**
![face_mapping_result](face_mapping_result.gif)
![face_mapping_result](media/face_mapping_result.gif)
**See the Magic!**
![movie](media/movie.gif)
## Command Line Arguments
**Watch movies in realtime with any face you want:**
It's as simple as opening a movie on the screen and selecting OBS as your camera!
![image](media/movie_img.png)
### Benchmarks
On Deepware scanner (the most popular deepfake-detection website), for a recording of a real-time face swap run on an RTX 3060:
![bench](media/deepwarebench.gif)
## Command Line Arguments (Unmaintained)
```
options:
@@ -392,9 +402,9 @@ This is an open-source project developed in our free time. Updates may be delaye
- [GosuDRM](https://github.com/GosuDRM) : for open version of roop
- [pereiraroland26](https://github.com/pereiraroland26) : Multiple faces support
- [vic4key](https://github.com/vic4key) : For supporting/contributing on this project
- [KRSHH](https://github.com/KRSHH) : For updating the UI
- [KRSHH](https://github.com/KRSHH) : For his contributions
- and [all developers](https://github.com/hacksider/Deep-Live-Cam/graphs/contributors) behind libraries used in this project.
- Foot Note: [This is originally roop-cam, see the full history of the code here.](https://github.com/hacksider/roop-cam) Please be informed that the base author of the code is [s0md3v](https://github.com/s0md3v/roop)
- Foot Note: Please be informed that the base author of the code is [s0md3v](https://github.com/s0md3v/roop)
## Contributions
![Alt](https://repobeats.axiom.co/api/embed/fec8e29c45dfdb9c5916f3a7830e1249308d20e1.svg "Repobeats analytics image")

[Binary media files, details omitted: eight existing images/GIFs were moved into the media/ folder with sizes unchanged (5.2 MiB, 11 MiB, 76 KiB, 104 KiB, 4.0 MiB, 8.6 MiB, 73 KiB, 4.3 MiB), and four new binaries were added: a 2.8 MiB GIF (likely media/deepwarebench.gif, referenced in the README above), media/download.png (9.0 KiB), media/movie.gif (1.6 MiB), and media/movie_img.png (794 KiB).]

View File: models folder note

@@ -1 +1,4 @@
just put the models in this folder
just put the models in this folder -
https://huggingface.co/hacksider/deep-live-cam/resolve/main/inswapper_128_fp16.onnx?download=true
https://github.com/TencentARC/GFPGAN/releases/download/v1.3.4/GFPGANv1.4.pth
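For reference, a hedged sketch of fetching both files with Python's standard library; the destination folder and the skip-if-present check are assumptions, not project code:

```python
# Hypothetical downloader for the two model files listed above.
import urllib.request
from pathlib import Path

MODELS = {
    "inswapper_128_fp16.onnx": "https://huggingface.co/hacksider/deep-live-cam/resolve/main/inswapper_128_fp16.onnx?download=true",
    "GFPGANv1.4.pth": "https://github.com/TencentARC/GFPGAN/releases/download/v1.3.4/GFPGANv1.4.pth",
}

models_dir = Path("models")
models_dir.mkdir(exist_ok=True)
for name, url in MODELS.items():
    dest = models_dir / name
    if not dest.exists():  # skip files that were already downloaded
        urllib.request.urlretrieve(url, str(dest))
```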

View File: modules/globals.py

@@ -36,3 +36,8 @@ fp_ui: Dict[str, bool] = {"face_enhancer": False}
camera_input_combobox = None
webcam_preview_running = False
show_fps = False
mouth_mask = False  # preserve the target's mouth region during the swap
show_mouth_mask_box = False  # draw a debug overlay of the mouth-mask polygon
mask_feather_ratio = 8  # higher ratio gives a narrower feathered mask edge
mask_down_size = 0.50  # outward expansion factor for the lower-lip polygon
mask_size = 1  # multiplier for the top-lip extension
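These module-level flags are read on every frame, so flipping them at runtime takes effect immediately; a brief hedged sketch, assuming the package is on the import path:

```python
# Hypothetical runtime toggle; swap_face() in face_swapper.py consults these per frame.
import modules.globals

modules.globals.mouth_mask = True            # keep the target's mouth region visible
modules.globals.show_mouth_mask_box = True   # draw the debug polygon overlay
```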

View File: modules/metadata.py

@@ -1,3 +1,3 @@
name = 'Deep Live Cam'
version = '1.6.0'
version = '1.7.0'
edition = 'Portable'

View File: modules/processors/frame/face_enhancer.py

@@ -11,7 +11,6 @@ from modules.face_analyser import get_one_face
from modules.typing import Frame, Face
from modules.utilities import (
conditional_download,
resolve_relative_path,
is_image,
is_video,
)
@@ -21,9 +20,11 @@ THREAD_SEMAPHORE = threading.Semaphore()
THREAD_LOCK = threading.Lock()
NAME = "DLC.FACE-ENHANCER"
abs_dir = os.path.dirname(os.path.abspath(__file__))
models_dir = os.path.join(os.path.dirname(os.path.dirname(os.path.dirname(abs_dir))), 'models')
def pre_check() -> bool:
download_directory_path = resolve_relative_path("..\models")
download_directory_path = models_dir
conditional_download(
download_directory_path,
[
@@ -47,11 +48,7 @@ def get_face_enhancer() -> Any:
with THREAD_LOCK:
if FACE_ENHANCER is None:
if os.name == "nt":
model_path = resolve_relative_path("..\models\GFPGANv1.4.pth")
# todo: set models path https://github.com/TencentARC/GFPGAN/issues/399
else:
model_path = resolve_relative_path("../models/GFPGANv1.4.pth")
model_path = os.path.join(models_dir, 'GFPGANv1.4.pth')
FACE_ENHANCER = gfpgan.GFPGANer(model_path=model_path, upscale=1) # type: ignore[attr-defined]
return FACE_ENHANCER
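The three nested os.path.dirname() calls climb from the processor file up to the repository root, so the download directory and the model load now share one absolute models folder on every OS (the PR #773 fix). A hedged illustration of the path arithmetic, assuming the file lives at modules/processors/frame/face_enhancer.py in a checkout at /repo:

```python
import os

# Hypothetical POSIX walkthrough of the models_dir computation above.
abs_dir = "/repo/modules/processors/frame"  # os.path.dirname(os.path.abspath(__file__))
models_dir = os.path.join(
    os.path.dirname(os.path.dirname(os.path.dirname(abs_dir))), "models"
)
print(models_dir)  # /repo/models, independent of the current working directory
```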

View File: modules/processors/frame/face_swapper.py

@@ -2,35 +2,51 @@ from typing import Any, List
import cv2
import insightface
import threading
import numpy as np
import modules.globals
import modules.processors.frame.core
from modules.core import update_status
from modules.face_analyser import get_one_face, get_many_faces, default_source_face
from modules.typing import Face, Frame
from modules.utilities import conditional_download, resolve_relative_path, is_image, is_video
from modules.utilities import (
conditional_download,
is_image,
is_video,
)
from modules.cluster_analysis import find_closest_centroid
import os
FACE_SWAPPER = None
THREAD_LOCK = threading.Lock()
NAME = 'DLC.FACE-SWAPPER'
NAME = "DLC.FACE-SWAPPER"
abs_dir = os.path.dirname(os.path.abspath(__file__))
models_dir = os.path.join(os.path.dirname(os.path.dirname(os.path.dirname(abs_dir))), 'models')
def pre_check() -> bool:
download_directory_path = resolve_relative_path('../models')
conditional_download(download_directory_path, ['https://huggingface.co/hacksider/deep-live-cam/blob/main/inswapper_128_fp16.onnx'])
download_directory_path = abs_dir
conditional_download(
download_directory_path,
[
"https://huggingface.co/hacksider/deep-live-cam/blob/main/inswapper_128_fp16.onnx"
],
)
return True
def pre_start() -> bool:
if not modules.globals.map_faces and not is_image(modules.globals.source_path):
update_status('Select an image for source path.', NAME)
update_status("Select an image for source path.", NAME)
return False
elif not modules.globals.map_faces and not get_one_face(cv2.imread(modules.globals.source_path)):
update_status('No face in source path detected.', NAME)
elif not modules.globals.map_faces and not get_one_face(
cv2.imread(modules.globals.source_path)
):
update_status("No face in source path detected.", NAME)
return False
if not is_image(modules.globals.target_path) and not is_video(modules.globals.target_path):
update_status('Select an image or video for target path.', NAME)
if not is_image(modules.globals.target_path) and not is_video(
modules.globals.target_path
):
update_status("Select an image or video for target path.", NAME)
return False
return True
@@ -40,20 +56,48 @@ def get_face_swapper() -> Any:
with THREAD_LOCK:
if FACE_SWAPPER is None:
model_path = resolve_relative_path('../models/inswapper_128_fp16.onnx')
FACE_SWAPPER = insightface.model_zoo.get_model(model_path, providers=modules.globals.execution_providers)
model_path = os.path.join(models_dir, 'inswapper_128_fp16.onnx')
FACE_SWAPPER = insightface.model_zoo.get_model(
model_path, providers=modules.globals.execution_providers
)
return FACE_SWAPPER
def swap_face(source_face: Face, target_face: Face, temp_frame: Frame) -> Frame:
return get_face_swapper().get(temp_frame, target_face, source_face, paste_back=True)
face_swapper = get_face_swapper()
# Apply the face swap
swapped_frame = face_swapper.get(
temp_frame, target_face, source_face, paste_back=True
)
if modules.globals.mouth_mask:
# Create a mask for the target face
face_mask = create_face_mask(target_face, temp_frame)
# Create the mouth mask
mouth_mask, mouth_cutout, mouth_box, lower_lip_polygon = (
create_lower_mouth_mask(target_face, temp_frame)
)
# Apply the mouth area
swapped_frame = apply_mouth_area(
swapped_frame, mouth_cutout, mouth_box, face_mask, lower_lip_polygon
)
if modules.globals.show_mouth_mask_box:
mouth_mask_data = (mouth_mask, mouth_cutout, mouth_box, lower_lip_polygon)
swapped_frame = draw_mouth_mask_visualization(
swapped_frame, target_face, mouth_mask_data
)
return swapped_frame
def process_frame(source_face: Face, temp_frame: Frame) -> Frame:
# Ensure the frame is in RGB format if color correction is enabled
if modules.globals.color_correction:
temp_frame = cv2.cvtColor(temp_frame, cv2.COLOR_BGR2RGB)
if modules.globals.many_faces:
many_faces = get_many_faces(temp_frame)
if many_faces:
@@ -71,35 +115,44 @@ def process_frame_v2(temp_frame: Frame, temp_frame_path: str = "") -> Frame:
if modules.globals.many_faces:
source_face = default_source_face()
for map in modules.globals.souce_target_map:
target_face = map['target']['face']
target_face = map["target"]["face"]
temp_frame = swap_face(source_face, target_face, temp_frame)
elif not modules.globals.many_faces:
for map in modules.globals.souce_target_map:
if "source" in map:
source_face = map['source']['face']
target_face = map['target']['face']
source_face = map["source"]["face"]
target_face = map["target"]["face"]
temp_frame = swap_face(source_face, target_face, temp_frame)
elif is_video(modules.globals.target_path):
if modules.globals.many_faces:
source_face = default_source_face()
for map in modules.globals.souce_target_map:
target_frame = [f for f in map['target_faces_in_frame'] if f['location'] == temp_frame_path]
target_frame = [
f
for f in map["target_faces_in_frame"]
if f["location"] == temp_frame_path
]
for frame in target_frame:
for target_face in frame['faces']:
for target_face in frame["faces"]:
temp_frame = swap_face(source_face, target_face, temp_frame)
elif not modules.globals.many_faces:
for map in modules.globals.souce_target_map:
if "source" in map:
target_frame = [f for f in map['target_faces_in_frame'] if f['location'] == temp_frame_path]
source_face = map['source']['face']
target_frame = [
f
for f in map["target_faces_in_frame"]
if f["location"] == temp_frame_path
]
source_face = map["source"]["face"]
for frame in target_frame:
for target_face in frame['faces']:
for target_face in frame["faces"]:
temp_frame = swap_face(source_face, target_face, temp_frame)
else:
detected_faces = get_many_faces(temp_frame)
if modules.globals.many_faces:
@@ -110,25 +163,46 @@ def process_frame_v2(temp_frame: Frame, temp_frame_path: str = "") -> Frame:
elif not modules.globals.many_faces:
if detected_faces:
if len(detected_faces) <= len(modules.globals.simple_map['target_embeddings']):
if len(detected_faces) <= len(
modules.globals.simple_map["target_embeddings"]
):
for detected_face in detected_faces:
closest_centroid_index, _ = find_closest_centroid(modules.globals.simple_map['target_embeddings'], detected_face.normed_embedding)
closest_centroid_index, _ = find_closest_centroid(
modules.globals.simple_map["target_embeddings"],
detected_face.normed_embedding,
)
temp_frame = swap_face(modules.globals.simple_map['source_faces'][closest_centroid_index], detected_face, temp_frame)
temp_frame = swap_face(
modules.globals.simple_map["source_faces"][
closest_centroid_index
],
detected_face,
temp_frame,
)
else:
detected_faces_centroids = []
for face in detected_faces:
detected_faces_centroids.append(face.normed_embedding)
detected_faces_centroids.append(face.normed_embedding)
i = 0
for target_embedding in modules.globals.simple_map['target_embeddings']:
closest_centroid_index, _ = find_closest_centroid(detected_faces_centroids, target_embedding)
for target_embedding in modules.globals.simple_map[
"target_embeddings"
]:
closest_centroid_index, _ = find_closest_centroid(
detected_faces_centroids, target_embedding
)
temp_frame = swap_face(modules.globals.simple_map['source_faces'][i], detected_faces[closest_centroid_index], temp_frame)
temp_frame = swap_face(
modules.globals.simple_map["source_faces"][i],
detected_faces[closest_centroid_index],
temp_frame,
)
i += 1
return temp_frame
def process_frames(source_path: str, temp_frame_paths: List[str], progress: Any = None) -> None:
def process_frames(
source_path: str, temp_frame_paths: List[str], progress: Any = None
) -> None:
if not modules.globals.map_faces:
source_face = get_one_face(cv2.imread(source_path))
for temp_frame_path in temp_frame_paths:
@@ -162,7 +236,9 @@ def process_image(source_path: str, target_path: str, output_path: str) -> None:
cv2.imwrite(output_path, result)
else:
if modules.globals.many_faces:
update_status('Many faces enabled. Using first source image. Progressing...', NAME)
update_status(
"Many faces enabled. Using first source image. Progressing...", NAME
)
target_frame = cv2.imread(output_path)
result = process_frame_v2(target_frame)
cv2.imwrite(output_path, result)
@@ -170,5 +246,367 @@ def process_image(source_path: str, target_path: str, output_path: str) -> None:
def process_video(source_path: str, temp_frame_paths: List[str]) -> None:
if modules.globals.map_faces and modules.globals.many_faces:
update_status('Many faces enabled. Using first source image. Progressing...', NAME)
modules.processors.frame.core.process_video(source_path, temp_frame_paths, process_frames)
update_status(
"Many faces enabled. Using first source image. Progressing...", NAME
)
modules.processors.frame.core.process_video(
source_path, temp_frame_paths, process_frames
)
def create_lower_mouth_mask(
face: Face, frame: Frame
) -> tuple[np.ndarray, np.ndarray, tuple, np.ndarray]:
mask = np.zeros(frame.shape[:2], dtype=np.uint8)
mouth_cutout = None
landmarks = face.landmark_2d_106
if landmarks is not None:
# 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20
lower_lip_order = [
65,
66,
62,
70,
69,
18,
19,
20,
21,
22,
23,
24,
0,
8,
7,
6,
5,
4,
3,
2,
65,
]
lower_lip_landmarks = landmarks[lower_lip_order].astype(
np.float32
) # Use float for precise calculations
# Calculate the center of the landmarks
center = np.mean(lower_lip_landmarks, axis=0)
# Expand the landmarks outward
expansion_factor = (
1 + modules.globals.mask_down_size
) # Adjust this for more or less expansion
expanded_landmarks = (lower_lip_landmarks - center) * expansion_factor + center
# Extend the top lip part
toplip_indices = [
20,
0,
1,
2,
3,
4,
5,
] # Positions in lower_lip_order covering the top-lip landmarks (65, 66, 62, 70, 69, 18)
toplip_extension = (
modules.globals.mask_size * 0.5
) # Adjust this factor to control the extension
for idx in toplip_indices:
direction = expanded_landmarks[idx] - center
direction = direction / np.linalg.norm(direction)
expanded_landmarks[idx] += direction * toplip_extension
# Extend the bottom part (chin area)
chin_indices = [
11,
12,
13,
14,
15,
16,
] # Positions in lower_lip_order covering the chin-area landmarks (24, 0, 8, 7, 6, 5)
chin_extension = 2 * 0.2 # Adjust this factor to control the extension
for idx in chin_indices:
expanded_landmarks[idx][1] += (
expanded_landmarks[idx][1] - center[1]
) * chin_extension
# Convert back to integer coordinates
expanded_landmarks = expanded_landmarks.astype(np.int32)
# Calculate bounding box for the expanded lower mouth
min_x, min_y = np.min(expanded_landmarks, axis=0)
max_x, max_y = np.max(expanded_landmarks, axis=0)
# Add some padding to the bounding box
padding = int((max_x - min_x) * 0.1) # 10% padding
min_x = max(0, min_x - padding)
min_y = max(0, min_y - padding)
max_x = min(frame.shape[1], max_x + padding)
max_y = min(frame.shape[0], max_y + padding)
# Ensure the bounding box dimensions are valid
if max_x <= min_x or max_y <= min_y:
if (max_x - min_x) <= 1:
max_x = min_x + 1
if (max_y - min_y) <= 1:
max_y = min_y + 1
# Create the mask
mask_roi = np.zeros((max_y - min_y, max_x - min_x), dtype=np.uint8)
cv2.fillPoly(mask_roi, [expanded_landmarks - [min_x, min_y]], 255)
# Apply Gaussian blur to soften the mask edges
mask_roi = cv2.GaussianBlur(mask_roi, (15, 15), 5)
# Place the mask ROI in the full-sized mask
mask[min_y:max_y, min_x:max_x] = mask_roi
# Extract the masked area from the frame
mouth_cutout = frame[min_y:max_y, min_x:max_x].copy()
# Return the expanded lower lip polygon in original frame coordinates
lower_lip_polygon = expanded_landmarks
return mask, mouth_cutout, (min_x, min_y, max_x, max_y), lower_lip_polygon
def draw_mouth_mask_visualization(
frame: Frame, face: Face, mouth_mask_data: tuple
) -> Frame:
landmarks = face.landmark_2d_106
if landmarks is not None and mouth_mask_data is not None:
mask, mouth_cutout, (min_x, min_y, max_x, max_y), lower_lip_polygon = (
mouth_mask_data
)
vis_frame = frame.copy()
# Ensure coordinates are within frame bounds
height, width = vis_frame.shape[:2]
min_x, min_y = max(0, min_x), max(0, min_y)
max_x, max_y = min(width, max_x), min(height, max_y)
# Adjust mask to match the region size
mask_region = mask[0 : max_y - min_y, 0 : max_x - min_x]
# Remove the color mask overlay
# color_mask = cv2.applyColorMap((mask_region * 255).astype(np.uint8), cv2.COLORMAP_JET)
# Ensure shapes match before blending
vis_region = vis_frame[min_y:max_y, min_x:max_x]
# Remove blending with color_mask
# if vis_region.shape[:2] == color_mask.shape[:2]:
# blended = cv2.addWeighted(vis_region, 0.7, color_mask, 0.3, 0)
# vis_frame[min_y:max_y, min_x:max_x] = blended
# Draw the lower lip polygon
cv2.polylines(vis_frame, [lower_lip_polygon], True, (0, 255, 0), 2)
# Remove the red box
# cv2.rectangle(vis_frame, (min_x, min_y), (max_x, max_y), (0, 0, 255), 2)
# Visualize the feathered mask
feather_amount = max(
1,
min(
30,
(max_x - min_x) // modules.globals.mask_feather_ratio,
(max_y - min_y) // modules.globals.mask_feather_ratio,
),
)
# Ensure kernel size is odd
kernel_size = 2 * feather_amount + 1
feathered_mask = cv2.GaussianBlur(
mask_region.astype(float), (kernel_size, kernel_size), 0
)
feathered_mask = (feathered_mask / feathered_mask.max() * 255).astype(np.uint8)
# Remove the feathered mask color overlay
# color_feathered_mask = cv2.applyColorMap(feathered_mask, cv2.COLORMAP_VIRIDIS)
# Ensure shapes match before blending feathered mask
# if vis_region.shape == color_feathered_mask.shape:
# blended_feathered = cv2.addWeighted(vis_region, 0.7, color_feathered_mask, 0.3, 0)
# vis_frame[min_y:max_y, min_x:max_x] = blended_feathered
# Add labels
cv2.putText(
vis_frame,
"Lower Mouth Mask",
(min_x, min_y - 10),
cv2.FONT_HERSHEY_SIMPLEX,
0.5,
(255, 255, 255),
1,
)
cv2.putText(
vis_frame,
"Feathered Mask",
(min_x, max_y + 20),
cv2.FONT_HERSHEY_SIMPLEX,
0.5,
(255, 255, 255),
1,
)
return vis_frame
return frame
def apply_mouth_area(
frame: np.ndarray,
mouth_cutout: np.ndarray,
mouth_box: tuple,
face_mask: np.ndarray,
mouth_polygon: np.ndarray,
) -> np.ndarray:
min_x, min_y, max_x, max_y = mouth_box
box_width = max_x - min_x
box_height = max_y - min_y
if (
mouth_cutout is None
or box_width is None
or box_height is None
or face_mask is None
or mouth_polygon is None
):
return frame
try:
resized_mouth_cutout = cv2.resize(mouth_cutout, (box_width, box_height))
roi = frame[min_y:max_y, min_x:max_x]
if roi.shape != resized_mouth_cutout.shape:
resized_mouth_cutout = cv2.resize(
resized_mouth_cutout, (roi.shape[1], roi.shape[0])
)
color_corrected_mouth = apply_color_transfer(resized_mouth_cutout, roi)
# Use the provided mouth polygon to create the mask
polygon_mask = np.zeros(roi.shape[:2], dtype=np.uint8)
adjusted_polygon = mouth_polygon - [min_x, min_y]
cv2.fillPoly(polygon_mask, [adjusted_polygon], 255)
# Apply feathering to the polygon mask
feather_amount = min(
30,
box_width // modules.globals.mask_feather_ratio,
box_height // modules.globals.mask_feather_ratio,
)
feathered_mask = cv2.GaussianBlur(
polygon_mask.astype(float), (0, 0), feather_amount
)
feathered_mask = feathered_mask / feathered_mask.max()
face_mask_roi = face_mask[min_y:max_y, min_x:max_x]
combined_mask = feathered_mask * (face_mask_roi / 255.0)
combined_mask = combined_mask[:, :, np.newaxis]
blended = (
color_corrected_mouth * combined_mask + roi * (1 - combined_mask)
).astype(np.uint8)
# Apply face mask to blended result
face_mask_3channel = (
np.repeat(face_mask_roi[:, :, np.newaxis], 3, axis=2) / 255.0
)
final_blend = blended * face_mask_3channel + roi * (1 - face_mask_3channel)
frame[min_y:max_y, min_x:max_x] = final_blend.astype(np.uint8)
except Exception:
# If resizing or blending fails (e.g. a zero-sized ROI), leave the frame unchanged
pass
return frame
def create_face_mask(face: Face, frame: Frame) -> np.ndarray:
mask = np.zeros(frame.shape[:2], dtype=np.uint8)
landmarks = face.landmark_2d_106
if landmarks is not None:
# Convert landmarks to int32
landmarks = landmarks.astype(np.int32)
# Extract facial features
right_side_face = landmarks[0:16]
left_side_face = landmarks[17:32]
right_eye = landmarks[33:42]
right_eye_brow = landmarks[43:51]
left_eye = landmarks[87:96]
left_eye_brow = landmarks[97:105]
# Calculate forehead extension
right_eyebrow_top = np.min(right_eye_brow[:, 1])
left_eyebrow_top = np.min(left_eye_brow[:, 1])
eyebrow_top = min(right_eyebrow_top, left_eyebrow_top)
face_top = np.min([right_side_face[0, 1], left_side_face[-1, 1]])
forehead_height = face_top - eyebrow_top
extended_forehead_height = int(forehead_height * 5.0) # Extend upward by 5x the eyebrow-to-face-top distance
# Create forehead points
forehead_left = right_side_face[0].copy()
forehead_right = left_side_face[-1].copy()
forehead_left[1] -= extended_forehead_height
forehead_right[1] -= extended_forehead_height
# Combine all points to create the face outline
face_outline = np.vstack(
[
[forehead_left],
right_side_face,
left_side_face[
::-1
], # Reverse left side to create a continuous outline
[forehead_right],
]
)
# Calculate padding
padding = int(
np.linalg.norm(right_side_face[0] - left_side_face[-1]) * 0.05
) # 5% of face width
# Create a slightly larger convex hull for padding
hull = cv2.convexHull(face_outline)
hull_padded = []
for point in hull:
x, y = point[0]
center = np.mean(face_outline, axis=0)
direction = np.array([x, y]) - center
direction = direction / np.linalg.norm(direction)
padded_point = np.array([x, y]) + direction * padding
hull_padded.append(padded_point)
hull_padded = np.array(hull_padded, dtype=np.int32)
# Fill the padded convex hull
cv2.fillConvexPoly(mask, hull_padded, 255)
# Smooth the mask edges
mask = cv2.GaussianBlur(mask, (5, 5), 3)
return mask
def apply_color_transfer(source, target):
"""
Apply color transfer from target to source image
"""
source = cv2.cvtColor(source, cv2.COLOR_BGR2LAB).astype("float32")
target = cv2.cvtColor(target, cv2.COLOR_BGR2LAB).astype("float32")
source_mean, source_std = cv2.meanStdDev(source)
target_mean, target_std = cv2.meanStdDev(target)
# Reshape mean and std to be broadcastable
source_mean = source_mean.reshape(1, 1, 3)
source_std = source_std.reshape(1, 1, 3)
target_mean = target_mean.reshape(1, 1, 3)
target_std = target_std.reshape(1, 1, 3)
# Perform the color transfer
source = (source - source_mean) * (target_std / source_std) + target_mean
return cv2.cvtColor(np.clip(source, 0, 255).astype("uint8"), cv2.COLOR_LAB2BGR)
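The heart of apply_mouth_area is a feathered alpha blend: the lip polygon is rasterized, Gaussian-blurred, normalized to [0, 1], multiplied by the face mask, and used as a per-pixel weight between the color-corrected mouth cutout and the swapped frame. A self-contained hedged sketch of that blend with toy inputs (all names, shapes, and values are illustrative, not project code):

```python
import cv2
import numpy as np

h, w = 64, 96                                     # toy ROI size
roi = np.full((h, w, 3), 80, dtype=np.uint8)      # stands in for the swapped-frame ROI
cutout = np.full((h, w, 3), 200, dtype=np.uint8)  # stands in for the mouth cutout

# Toy lip polygon; OpenCV expects int32 points.
polygon = np.array([[10, 40], [48, 10], [86, 40], [48, 60]], dtype=np.int32)
mask = np.zeros((h, w), dtype=np.uint8)
cv2.fillPoly(mask, [polygon], 255)

feathered = cv2.GaussianBlur(mask.astype(float), (0, 0), 8)  # soften the edge
alpha = (feathered / feathered.max())[:, :, np.newaxis]      # per-pixel weights in [0, 1]

blended = (cutout * alpha + roi * (1 - alpha)).astype(np.uint8)
```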

View File: modules/ui.py

@@ -95,6 +95,8 @@ def save_switch_states():
"live_resizable": modules.globals.live_resizable,
"fp_ui": modules.globals.fp_ui,
"show_fps": modules.globals.show_fps,
"mouth_mask": modules.globals.mouth_mask,
"show_mouth_mask_box": modules.globals.show_mouth_mask_box
}
with open("switch_states.json", "w") as f:
json.dump(switch_states, f)
@@ -115,6 +117,8 @@ def load_switch_states():
modules.globals.live_resizable = switch_states.get("live_resizable", False)
modules.globals.fp_ui = switch_states.get("fp_ui", {"face_enhancer": False})
modules.globals.show_fps = switch_states.get("show_fps", False)
modules.globals.mouth_mask = switch_states.get("mouth_mask", False)
modules.globals.show_mouth_mask_box = switch_states.get("show_mouth_mask_box", False)
except FileNotFoundError:
# If the file doesn't exist, use default values
pass
@@ -269,6 +273,28 @@ def create_root(start: Callable[[], None], destroy: Callable[[], None]) -> ctk.CTk:
)
show_fps_switch.place(relx=0.6, rely=0.75)
mouth_mask_var = ctk.BooleanVar(value=modules.globals.mouth_mask)
mouth_mask_switch = ctk.CTkSwitch(
root,
text="Mouth Mask",
variable=mouth_mask_var,
cursor="hand2",
command=lambda: setattr(modules.globals, "mouth_mask", mouth_mask_var.get()),
)
mouth_mask_switch.place(relx=0.1, rely=0.55)
show_mouth_mask_box_var = ctk.BooleanVar(value=modules.globals.show_mouth_mask_box)
show_mouth_mask_box_switch = ctk.CTkSwitch(
root,
text="Show Mouth Mask Box",
variable=show_mouth_mask_box_var,
cursor="hand2",
command=lambda: setattr(
modules.globals, "show_mouth_mask_box", show_mouth_mask_box_var.get()
),
)
show_mouth_mask_box_switch.place(relx=0.6, rely=0.55)
start_button = ctk.CTkButton(
root, text="Start", cursor="hand2", command=lambda: analyze_target(start, root)
)
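The persistence pattern above is symmetric: each switch's value is written to switch_states.json on save and read back with a default on load. A hedged minimal sketch of the round trip, mirroring the two keys added above (standalone illustration, not project code):

```python
import json

STATE_FILE = "switch_states.json"  # same file name the UI uses


def save_states(mouth_mask: bool, show_box: bool) -> None:
    with open(STATE_FILE, "w") as f:
        json.dump({"mouth_mask": mouth_mask, "show_mouth_mask_box": show_box}, f)


def load_states() -> tuple:
    try:
        with open(STATE_FILE) as f:
            states = json.load(f)
    except FileNotFoundError:
        states = {}  # first run: fall back to defaults
    return states.get("mouth_mask", False), states.get("show_mouth_mask_box", False)
```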

View File: requirements.txt

@@ -21,4 +21,3 @@ protobuf==4.23.2
tqdm==4.66.4
gfpgan==1.3.8
tkinterdnd2==0.4.2
customtkinter==5.2.2