google colab

pull/579/head
Thankgod20 2024-08-16 04:47:09 +01:00
parent 4324b41b9e
commit 789b861504
10 changed files with 1929 additions and 63 deletions

.gitignore (vendored)

@@ -17,6 +17,7 @@ temp/
venv/
env/
workflow/
+deepfake/
gfpgan/
models/inswapper_128.onnx
models/GFPGANv1.4.pth

README.md

@@ -1,7 +1,7 @@
![demo-gif](demo.gif)

## Disclaimer
This software is meant to be a productive contribution to the rapidly growing AI-generated media industry. It will help artists with tasks such as animating a custom character or using the character as a model for clothing etc.
The developers of this software are aware of its possible unethical applications and are committed to taking preventative measures against them. It has a built-in check which prevents the program from working on inappropriate media, including but not limited to nudity, graphic content, and sensitive material such as war footage. We will continue to develop this project in the positive direction while adhering to law and ethics. This project may be shut down or include watermarks on the output if requested by law.
@@ -10,39 +10,44 @@ Users of this software are expected to use this software responsibly while abidi
## How do I install it?

### Basic: It is more likely to work on your computer, but it will also be very slow. You can follow the instructions for the basic install (this usually runs via **CPU**)

#### 1. Setup your platform
- python (3.10 recommended)
- pip
- git
- [ffmpeg](https://www.youtube.com/watch?v=OlNWCpFdVMA)
- [visual studio 2022 runtimes (windows)](https://visualstudio.microsoft.com/visual-cpp-build-tools/)

#### 2. Clone Repository
https://github.com/hacksider/Deep-Live-Cam.git

#### 3. Download Models
1. [GFPGANv1.4](https://huggingface.co/hacksider/deep-live-cam/resolve/main/GFPGANv1.4.pth)
2. [inswapper_128_fp16.onnx](https://huggingface.co/hacksider/deep-live-cam/resolve/main/inswapper_128_fp16.onnx)

Then put those 2 files in the "**models**" folder.

#### 4. Install dependencies
We highly recommend working with a `venv` to avoid issues.
```
pip install -r requirements.txt
```
##### DONE!!! If you don't have a GPU, you should be able to run roop using the `python run.py` command. Keep in mind that while running the program for the first time, it will download some models, which can take time depending on your network connection.
-### *Proceed if you want to use GPU Acceleration
-### CUDA Execution Provider (Nvidia)*
+### \*Proceed if you want to use GPU Acceleration
+### CUDA Execution Provider (Nvidia)\*
1. Install [CUDA Toolkit 11.8](https://developer.nvidia.com/cuda-11-8-0-download-archive)
2. Install dependencies:
```
pip uninstall onnxruntime onnxruntime-gpu
pip install onnxruntime-gpu==1.16.3
```
@@ -124,6 +129,7 @@ python run.py --execution-provider openvino
```
## How do I use it?
> Note: When you run this program for the first time, it will download some models ~300MB in size.

Executing the `python run.py` command will launch this window:
@@ -132,16 +138,39 @@ Executing `python run.py` command will launch this window:
Choose a face (image with the desired face) and the target image/video (image/video in which you want to replace the face) and click on `Start`. Open file explorer and navigate to the directory you selected for your output. You will find a directory named `<video_title>` where you can watch the frames being swapped in real time. Once the processing is done, it will create the output file. That's it.

## For the webcam mode
Just follow the clicks on the screenshot
1. Select a face
2. Click live
3. Wait for a few seconds (it takes a longer time, usually 10 to 30 seconds, before the preview shows up)

![demo-gif](demo.gif)
-Just use your favorite screencapture to stream like OBS
-> Note: In case you want to change your face, just select another picture, the preview mode will then restart (so just wait a bit).
+## TO USE WITH GOOGLE COLAB
+1. Upload the notebook at /google-colab/DeepLive_Google_Colab.ipynb to your Google Colab.
+2. Follow the instructions in the notebook to run it.
+Note: For TCP tunneling you can use either Ngrok or FRP.
+   i. For Ngrok, you need an API key (with payment details added) to use a TCP connection.
+   ii. FRP is free, but you need to host the FRPS server on a VPS and replace '194.113.64.71' with your VPS address.
+3. Follow the image instructions to enter the TCP addresses appropriately.
+On Colab:
+![gui-demo](colab_tcp_tunnel.png)
+On your machine:
+![gui-demo](colab_instruction.png)
+4. For live mode, add the source image, enable Remote Processor, fill in the TCP addresses correctly, and click on Live.
+5. For image swap, add the source and target images, enable Remote Processor, and click on Preview.
+
+Just use your favorite screencapture to stream like OBS
+> Note: In case you want to change your face, just select another picture; the preview mode will then restart (so just wait a bit).
Additional command line arguments are given below. To find out what they do, check [this guide](https://github.com/s0md3v/roop/wiki/Advanced-Options).
@@ -167,6 +196,7 @@ options:
Looking for a CLI mode? Using the -s/--source argument will make the program run in CLI mode.

## Want the Next Update Now?
If you want the latest and greatest build, or want to see some new great features, go to our [experimental branch](https://github.com/hacksider/Deep-Live-Cam/tree/experimental) and experience what the contributors have given.

## Credits

colab_demo.mov (new binary file; content not shown)

google-colab/DeepLive_Google_Colab.ipynb: file diff suppressed because it is too large.
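The suppressed file is presumably the Colab notebook from step 1 above. Its contents are not shown, but the client protocol in modules/processors/frame/remote_processor.py (added below) implies what the worker side must do: receive metadata, then 100 KB chunks over ZMQ REQ/REP, ACKing each one. A minimal sketch of that receiving half follows; the bind port and the dtype/shape key handling are assumptions, not the notebook's actual code.

```python
# Hypothetical Colab-side receiver mirroring the client's chunked REQ/REP protocol.
import zmq
import numpy as np

context = zmq.Context()
receiver = context.socket(zmq.REP)
receiver.bind('tcp://*:5551')  # assumed port; the TCP tunnel forwards the client's push_addr here

def recv_array(sock: zmq.Socket) -> np.ndarray:
    meta = sock.recv_json()          # metadata arrives first
    sock.send_string("ACK")
    data = b''
    for i in range(meta['total_chunk']):
        data += sock.recv()          # one 100 KB chunk per REQ/REP round trip
        sock.send_string(f"ACK {i + 1}/{meta['total_chunk']}")
    if sock.recv() == b"END":        # client signals completion
        sock.send_string("Final ACK")
    # source frames carry dtype_source/shape_source; target frames carry dtype_temp/shape_temp
    dtype = meta.get('dtype_source', meta.get('dtype_temp'))
    shape = meta.get('shape_source', meta.get('shape_temp'))
    return np.frombuffer(data, dtype=np.dtype(dtype)).reshape(shape)

source_face = recv_array(receiver)   # first transfer is the source face image
```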

modules/core.py

@@ -34,7 +34,7 @@ def parse_args() -> None:
    program.add_argument('-s', '--source', help='select an source image', dest='source_path')
    program.add_argument('-t', '--target', help='select an target image or video', dest='target_path')
    program.add_argument('-o', '--output', help='select output file or directory', dest='output_path')
-    program.add_argument('--frame-processor', help='pipeline of frame processors', dest='frame_processor', default=['face_swapper'], choices=['face_swapper', 'face_enhancer'], nargs='+')
+    program.add_argument('--frame-processor', help='pipeline of frame processors', dest='frame_processor', default=['face_swapper'], choices=['face_swapper', 'face_enhancer', 'remote_processor'], nargs='+')
    program.add_argument('--keep-fps', help='keep original fps', dest='keep_fps', action='store_true', default=False)
    program.add_argument('--keep-audio', help='keep original audio', dest='keep_audio', action='store_true', default=True)
    program.add_argument('--keep-frames', help='keep temporary frames', dest='keep_frames', action='store_true', default=False)
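With the new choice in place, a CLI invocation (see the README's CLI note) could request the remote pipeline; the file names below are placeholders. Note that this commit only wires the TCP addresses up through the UI text boxes, so modules.globals.push_addr/pull_addr would still need to be set some other way before this would pass pre_check().

```
python run.py -s face.jpg -t target.mp4 -o swapped.mp4 --frame-processor remote_processor
```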
@@ -74,7 +74,11 @@ def parse_args() -> None:
        modules.globals.fp_ui['face_enhancer'] = True
    else:
        modules.globals.fp_ui['face_enhancer'] = False
+    # for Remote tumbler:
+    if 'remote_processor' in args.frame_processor:
+        modules.globals.fp_ui['remote_processor'] = True
+    else:
+        modules.globals.fp_ui['remote_processor'] = False

    modules.globals.nsfw = False

    # translate deprecated args

modules/globals.py

@@ -28,3 +28,6 @@ fp_ui: Dict[str, bool] = {}
nsfw = None
camera_input_combobox = None
webcam_preview_running = False
+push_addr = None
+pull_addr = None
+push_addr_two = None
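These three globals hold the tunnel endpoints the remote processor connects to. Plausible values for illustration only: the VPS address comes from the README's FRP note, and only ports 5552/5553 are hinted at in remote_processor.py's comments.

```python
# Assumed endpoint layout; not prescribed anywhere in the commit:
modules.globals.push_addr = 'tcp://194.113.64.71:5551'      # source face out (ZMQ REQ)
modules.globals.push_addr_two = 'tcp://194.113.64.71:5552'  # target/webcam frames out
modules.globals.pull_addr = 'tcp://194.113.64.71:5553'      # processed frames back in
```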

modules/metadata.py

@@ -1,3 +1,3 @@
name = 'Deep Live Cam'
-version = '1.3.0'
-edition = 'Portable'
+version = '1.3.1'
+edition = 'Portable-Colab'

modules/processors/frame/remote_processor.py (new file)

@@ -0,0 +1,243 @@
import zmq
import cv2
import modules.globals
import modules.processors.frame.core  # used by process_video() below
import numpy as np
import threading
import time
import io
from tqdm import tqdm
from modules.typing import Face, Frame
from typing import Any, List
from modules.core import update_status
from modules.utilities import conditional_download, resolve_relative_path, is_image, is_video
import zlib
import subprocess
from cv2 import VideoCapture
import queue

NAME = 'DLC.REMOTE-PROCESSOR'
context = zmq.Context()
# Socket helpers: the local machine dials out through the TCP tunnel;
# REQ sends frames out, REP receives processed frames back.
def push_socket(address) -> zmq.Socket:
    sender_sock = context.socket(zmq.REQ)
    sender_sock.connect(address)
    return sender_sock

def pull_socket(address) -> zmq.Socket:
    sender_sock = context.socket(zmq.REP)
    sender_sock.connect(address)
    return sender_sock
def pre_check() -> bool:
    if not modules.globals.push_addr and not modules.globals.pull_addr:
        return False
    return True

def pre_start() -> bool:
    if not is_image(modules.globals.target_path) and not is_video(modules.globals.target_path):
        update_status('Select an image or video for target path.', NAME)
        return False
    return True
def stream_frame(temp_frame: Frame, stream_out: subprocess.Popen[bytes], stream_in: subprocess.Popen[bytes]) -> Frame:
    temp_framex = swap_face_remote(temp_frame, stream_out, stream_in)
    return temp_framex

def process_frame(source_frame: Frame, temp_frame: Frame) -> Frame:
    temp_framex = swap_frame_face_remote(source_frame, temp_frame)
    return temp_framex
def send_data(sender: zmq.Socket, face_bytes: bytes, metadata: dict, address: str) -> None:
    chunk_size = 1024 * 100
    total_chunk = len(face_bytes) // chunk_size + 1
    new_metadata = {'total_chunk': total_chunk}
    metadata.update(new_metadata)
    # Send metadata first
    sender.send_json(metadata)
    # Wait for acknowledgment for metadata
    ack = sender.recv_string()
    with tqdm(total=total_chunk, desc="Sending chunks", unit="chunk") as pbar:
        for i in range(total_chunk):
            chunk = face_bytes[i * chunk_size:(i + 1) * chunk_size]
            # Send the chunk
            sender.send(chunk)
            # Wait for acknowledgment after sending each chunk
            ack = sender.recv_string()
            pbar.set_postfix_str(f'Chunk {i + 1}/{total_chunk} ack: {ack}')
            pbar.update(1)
    # Send a final message to indicate all chunks are sent
    sender.send(b"END")
    # Wait for the final reply
    final_reply_message = sender.recv_string()
    print(f"Received final reply: {final_reply_message}")
def send_source_frame(source_face: Frame) -> None:
    sender = push_socket(modules.globals.push_addr)
    source_face_bytes = source_face.tobytes()
    metadata = {
        'manyface': (modules.globals.many_faces),
        'dtype_source': str(source_face.dtype),
        'shape_source': source_face.shape,
        'size': '640x480',
        'fps': '60'
        #'shape_temp': temp_frame.shape
    }
    send_data(sender, source_face_bytes, metadata, modules.globals.push_addr)

def send_temp_frame(temp_face: Frame) -> None:
    sender = push_socket(modules.globals.push_addr_two)
    source_face_bytes = temp_face.tobytes()
    metadata = {
        'manyface': (modules.globals.many_faces),
        'dtype_temp': str(temp_face.dtype),
        'shape_temp': temp_face.shape,
        #'shape_temp': temp_frame.shape
    }
    send_data(sender, source_face_bytes, metadata, modules.globals.push_addr)
def receive_processed_frame(output_queue: queue.Queue) -> None:
    while True:
        pull_socket_ = pull_socket(modules.globals.pull_addr)
        meta_data_json = pull_socket_.recv_json()
        print(meta_data_json)
        total_chunk = meta_data_json['total_chunk']
        # Send acknowledgment for metadata
        pull_socket_.send_string("ACK")
        # Receive the array bytes
        source_array_bytes = b''
        with tqdm(total=total_chunk, desc="Receiving chunks", unit="chunk") as pbar:
            for i in range(total_chunk):
                chunk = pull_socket_.recv()
                source_array_bytes += chunk
                pull_socket_.send_string(f"ACK {i + 1}/{total_chunk}")
                pbar.set_postfix_str(f'Chunk {i + 1}/{total_chunk}')
                pbar.update(1)
        end_message = pull_socket_.recv()
        if end_message == b"END":
            pull_socket_.send_string("Final ACK")
            # Deserialize the bytes back to an ndarray
            source_array = np.frombuffer(source_array_bytes, dtype=np.dtype(meta_data_json['dtype_source'])).reshape(meta_data_json['shape_source'])
            output_queue.put(source_array)
            break
def send_streams(cap: VideoCapture) -> subprocess.Popen[bytes]:
    ffmpeg_command = [
        'ffmpeg',
        '-f', 'rawvideo',
        '-pix_fmt', 'bgr24',
        '-s', f"{int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))}x{int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))}",
        '-r', str(int(cap.get(cv2.CAP_PROP_FPS))),
        '-i', '-',
        '-c:v', 'libx264',
        '-preset', 'ultrafast',
        '-tune', 'zerolatency',
        '-fflags', 'nobuffer',
        '-flags', 'low_delay',
        '-rtbufsize', '100M',
        '-f', 'mpegts', modules.globals.push_addr_two  # e.g. 'tcp://127.0.0.1:5552'
    ]
    ffmpeg_process = subprocess.Popen(ffmpeg_command, stdin=subprocess.PIPE)
    return ffmpeg_process

def recieve_streams(cap: VideoCapture) -> subprocess.Popen[bytes]:
    ffmpeg_command_recie = [
        'ffmpeg',
        '-i', modules.globals.pull_addr,  # e.g. 'tcp://127.0.0.1:5553'
        '-f', 'rawvideo',
        '-pix_fmt', 'bgr24',
        '-s', '960x540',  # '640x480'
        'pipe:1'
    ]
    ffmpeg_process_com = subprocess.Popen(ffmpeg_command_recie, stdout=subprocess.PIPE)
    return ffmpeg_process_com
def write_to_stdin(queue: queue.Queue, stream_out: subprocess.Popen):
    temp_frame = queue.get()
    temp_frame_bytes = temp_frame.tobytes()
    stream_out.stdin.write(temp_frame_bytes)

def read_from_stdout(queue: queue.Queue, stream_in: subprocess.Popen, output_queue: queue.Queue):
    raw_frame = stream_in.stdout.read(960 * 540 * 3)
    frame = np.frombuffer(raw_frame, dtype=np.uint8).reshape((540, 960, 3))
    output_queue.put(frame)
def swap_face_remote(temp_frame: Frame, stream_out: subprocess.Popen[bytes], stream_in: subprocess.Popen[bytes]) -> Frame:
    input_queue = queue.Queue()
    output_queue = queue.Queue()
    # Start threads for stdin and stdout
    write_thread = threading.Thread(target=write_to_stdin, args=(input_queue, stream_out))
    read_thread = threading.Thread(target=read_from_stdout, args=(input_queue, stream_in, output_queue))
    write_thread.start()
    read_thread.start()
    # Send the frame to the stdin thread
    input_queue.put(temp_frame)
    # Wait for the processed frame from the stdout thread
    processed_frame = output_queue.get()
    # Stop the threads
    input_queue.put(None)
    write_thread.join()
    read_thread.join()
    return processed_frame
def swap_frame_face_remote(source_frame: Frame, temp_frame: Frame) -> Frame:
    output_queue = queue.Queue()
    # Send the source and target frames on their own threads
    write_thread = threading.Thread(target=send_source_frame, args=(source_frame,))
    write_thread_tw = threading.Thread(target=send_temp_frame, args=(temp_frame,))
    read_thread_ = threading.Thread(target=receive_processed_frame, args=(output_queue,))
    write_thread.start()
    write_thread_tw.start()
    read_thread_.start()
    # Wait for the processed frame from the receiver thread
    processed_frame = output_queue.get()
    # Stop the threads
    write_thread.join()
    write_thread_tw.join()
    read_thread_.join()
    return processed_frame
def process_frames(source_path: str, temp_frame_paths: List[str], progress: Any = None) -> None:
    for temp_frame_path in temp_frame_paths:
        temp_frame = cv2.imread(temp_frame_path)
        result = process_frame(None, temp_frame)
        cv2.imwrite(temp_frame_path, result)
        if progress:
            progress.update(1)

def process_image(source_path: str, target_path: str, output_path: str) -> None:
    target_frame = cv2.imread(target_path)
    result = process_frame(None, target_frame)
    cv2.imwrite(output_path, result)

def process_video(source_path: str, temp_frame_paths: List[str]) -> None:
    modules.processors.frame.core.process_video(None, temp_frame_paths, process_frames)
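Taken together, the module exposes two paths: a chunked ZMQ path for still frames (send_source_frame, send_temp_frame, receive_processed_frame) and an ffmpeg MPEG-TS path for live webcam frames (send_streams, recieve_streams, stream_frame). A sketch of driving the still-image path directly, assuming the three addresses in modules.globals are already set (see the example after modules/globals.py above) and a worker is listening; the file names are hypothetical:

```python
import cv2
from modules.processors.frame import remote_processor

source_frame = cv2.imread('face.jpg')    # face to transplant (hypothetical path)
target_frame = cv2.imread('target.jpg')  # frame to swap into (hypothetical path)
# Sends both frames to the remote worker and blocks until the result returns.
result = remote_processor.process_frame(source_frame, target_frame)
cv2.imwrite('swapped.jpg', result)
```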

modules/ui.py

@@ -1,6 +1,8 @@
+import sys
import os
import webbrowser
import customtkinter as ctk
+import tkinter as tk
from typing import Callable, Tuple
import cv2
from PIL import Image, ImageOps
@@ -14,7 +16,7 @@ from modules.utilities import is_image, is_video, resolve_relative_path

ROOT = None
ROOT_HEIGHT = 700
-ROOT_WIDTH = 600
+ROOT_WIDTH = 800

PREVIEW = None
PREVIEW_MAX_HEIGHT = 700
@@ -62,47 +64,80 @@ def create_root(start: Callable[[], None], destroy: Callable[[], None]) -> ctk.C
    target_label.place(relx=0.6, rely=0.1, relwidth=0.3, relheight=0.25)

    select_face_button = ctk.CTkButton(root, text='Select a face', cursor='hand2', command=lambda: select_source_path())
-    select_face_button.place(relx=0.1, rely=0.4, relwidth=0.3, relheight=0.1)
+    select_face_button.place(relx=0.1, rely=0.3, relwidth=0.3, relheight=0.1)

    select_target_button = ctk.CTkButton(root, text='Select a target', cursor='hand2', command=lambda: select_target_path())
-    select_target_button.place(relx=0.6, rely=0.4, relwidth=0.3, relheight=0.1)
+    select_target_button.place(relx=0.6, rely=0.3, relwidth=0.3, relheight=0.1)

    keep_fps_value = ctk.BooleanVar(value=modules.globals.keep_fps)
    keep_fps_checkbox = ctk.CTkSwitch(root, text='Keep fps', variable=keep_fps_value, cursor='hand2', command=lambda: setattr(modules.globals, 'keep_fps', not modules.globals.keep_fps))
-    keep_fps_checkbox.place(relx=0.1, rely=0.6)
+    keep_fps_checkbox.place(relx=0.1, rely=0.5)

    keep_frames_value = ctk.BooleanVar(value=modules.globals.keep_frames)
    keep_frames_switch = ctk.CTkSwitch(root, text='Keep frames', variable=keep_frames_value, cursor='hand2', command=lambda: setattr(modules.globals, 'keep_frames', keep_frames_value.get()))
-    keep_frames_switch.place(relx=0.1, rely=0.65)
+    keep_frames_switch.place(relx=0.1, rely=0.55)

    # for FRAME PROCESSOR ENHANCER tumbler:
    enhancer_value = ctk.BooleanVar(value=modules.globals.fp_ui['face_enhancer'])
    enhancer_switch = ctk.CTkSwitch(root, text='Face Enhancer', variable=enhancer_value, cursor='hand2', command=lambda: update_tumbler('face_enhancer', enhancer_value.get()))
-    enhancer_switch.place(relx=0.1, rely=0.7)
+    enhancer_switch.place(relx=0.1, rely=0.6)

+    remote_process_value = ctk.BooleanVar(value=modules.globals.fp_ui['remote_processor'])
+    remote_process_switch = ctk.CTkSwitch(root, text='Remote Processor', variable=remote_process_value, cursor='hand2', command=lambda: update_tumbler('remote_processor', remote_process_value.get()))
+    remote_process_switch.place(relx=0.1, rely=0.65)
+
+    def on_text_change(event=None):
+        setattr(modules.globals, 'pull_addr', text_box_addr_in.get("1.0", tk.END).strip())
+
+    def on_text_change_out(event=None):
+        setattr(modules.globals, 'push_addr', text_box_addr_out.get("1.0", tk.END).strip())
+
+    def on_text_change_out_two(event=None):
+        setattr(modules.globals, 'push_addr_two', text_box_addr_out_t.get("1.0", tk.END).strip())

    keep_audio_value = ctk.BooleanVar(value=modules.globals.keep_audio)
    keep_audio_switch = ctk.CTkSwitch(root, text='Keep audio', variable=keep_audio_value, cursor='hand2', command=lambda: setattr(modules.globals, 'keep_audio', keep_audio_value.get()))
-    keep_audio_switch.place(relx=0.6, rely=0.6)
+    keep_audio_switch.place(relx=0.6, rely=0.5)

    many_faces_value = ctk.BooleanVar(value=modules.globals.many_faces)
    many_faces_switch = ctk.CTkSwitch(root, text='Many faces', variable=many_faces_value, cursor='hand2', command=lambda: setattr(modules.globals, 'many_faces', many_faces_value.get()))
-    many_faces_switch.place(relx=0.6, rely=0.65)
+    many_faces_switch.place(relx=0.6, rely=0.55)

-    # nsfw_value = ctk.BooleanVar(value=modules.globals.nsfw)
-    # nsfw_switch = ctk.CTkSwitch(root, text='NSFW', variable=nsfw_value, cursor='hand2', command=lambda: setattr(modules.globals, 'nsfw', nsfw_value.get()))
-    # nsfw_switch.place(relx=0.6, rely=0.7)
+    nsfw_value = ctk.BooleanVar(value=modules.globals.nsfw)
+    nsfw_switch = ctk.CTkSwitch(root, text='NSFW', variable=nsfw_value, cursor='hand2', command=lambda: setattr(modules.globals, 'nsfw', nsfw_value.get()))
+    nsfw_switch.place(relx=0.6, rely=0.6)

+    # Label for the inbound-address text box
+    label = ctk.CTkLabel(root, text="In:")
+    label.place(relx=0.1, rely=0.72, anchor=tk.E)
+    # Label for the outbound-address text boxes
+    label = ctk.CTkLabel(root, text="OutS:")
+    label.place(relx=0.6, rely=0.72, anchor=tk.E)
+    # Create a text box (sets pull_addr: processed frames in)
+    text_box_addr_in = ctk.CTkTextbox(root, width=200, height=10)
+    text_box_addr_in.place(relx=0.1, rely=0.7)
+    text_box_addr_in.bind("<KeyRelease>", on_text_change)
+    # Create a text box (sets push_addr: source frames out)
+    text_box_addr_out = ctk.CTkTextbox(root, width=100, height=10)
+    text_box_addr_out.place(relx=0.6, rely=0.7)
+    text_box_addr_out.bind("<KeyRelease>", on_text_change_out)
+    # Create a text box (sets push_addr_two: target/webcam frames out)
+    text_box_addr_out_t = ctk.CTkTextbox(root, width=100, height=10)
+    text_box_addr_out_t.place(relx=0.8, rely=0.7)
+    text_box_addr_out_t.bind("<KeyRelease>", on_text_change_out_two)

    start_button = ctk.CTkButton(root, text='Start', cursor='hand2', command=lambda: select_output_path(start))
-    start_button.place(relx=0.15, rely=0.80, relwidth=0.2, relheight=0.05)
+    start_button.place(relx=0.15, rely=0.75, relwidth=0.2, relheight=0.05)

    stop_button = ctk.CTkButton(root, text='Destroy', cursor='hand2', command=lambda: destroy())
-    stop_button.place(relx=0.4, rely=0.80, relwidth=0.2, relheight=0.05)
+    stop_button.place(relx=0.4, rely=0.75, relwidth=0.2, relheight=0.05)

    preview_button = ctk.CTkButton(root, text='Preview', cursor='hand2', command=lambda: toggle_preview())
-    preview_button.place(relx=0.65, rely=0.80, relwidth=0.2, relheight=0.05)
+    preview_button.place(relx=0.65, rely=0.75, relwidth=0.2, relheight=0.05)

    live_button = ctk.CTkButton(root, text='Live', cursor='hand2', command=lambda: webcam_preview())
-    live_button.place(relx=0.40, rely=0.86, relwidth=0.2, relheight=0.05)
+    live_button.place(relx=0.40, rely=0.85, relwidth=0.2, relheight=0.05)

    status_label = ctk.CTkLabel(root, text=None, justify='center')
    status_label.place(relx=0.1, rely=0.9, relwidth=0.8)
@@ -235,11 +270,24 @@ def init_preview() -> None:
def update_preview(frame_number: int = 0) -> None:
    if modules.globals.source_path and modules.globals.target_path:
        temp_frame = get_video_frame(modules.globals.target_path, frame_number)
+        remote_process = False
        if modules.globals.nsfw == False:
            from modules.predicter import predict_frame
            if predict_frame(temp_frame):
                quit()
        for frame_processor in get_frame_processors_modules(modules.globals.frame_processors):
+            if 'remote_processor' in modules.globals.frame_processors:
+                remote_process = True
+                if frame_processor.__name__ == "modules.processors.frame.remote_processor":
+                    print('------- Remote Process ----------')
+                    source_data = cv2.imread(modules.globals.source_path)
+                    if not frame_processor.pre_check():
+                        print("No Input and Output Address")
+                        sys.exit()
+                    temp_frame = frame_processor.process_frame(source_data, temp_frame)
-            temp_frame = frame_processor.process_frame(
-                get_one_face(cv2.imread(modules.globals.source_path)),
-                temp_frame
+            if not remote_process:
+                temp_frame = frame_processor.process_frame(
+                    get_one_face(cv2.imread(modules.globals.source_path)),
+                    temp_frame
@@ -257,6 +305,7 @@ def webcam_preview():
    global preview_label, PREVIEW

    cap = cv2.VideoCapture(0)  # Use index for the webcam (adjust the index accordingly if necessary)
+    cap.set(cv2.CAP_PROP_BUFFERSIZE, 3)
    cap.set(cv2.CAP_PROP_FRAME_WIDTH, 960)  # Set the width of the resolution
    cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 540)  # Set the height of the resolution
    cap.set(cv2.CAP_PROP_FPS, 60)  # Set the frame rate of the webcam
@@ -270,7 +319,21 @@ def webcam_preview():
    frame_processors = get_frame_processors_modules(modules.globals.frame_processors)

    source_image = None  # Initialize variable for the selected face image
+    remote_process = False  # By default remote processing is disabled
+    stream_out = None  # Both variables store the subprocesses run by ffmpeg
+    stream_in = None
+    if 'remote_processor' in modules.globals.frame_processors:
+        remote_modules = get_frame_processors_modules(['remote_processor'])
+        source_data = cv2.imread(modules.globals.source_path)
+        remote_modules[1].send_source_frame(source_data)
+        # start ffmpeg stream-out subprocess
+        stream_out = remote_modules[1].send_streams(cap)
+        # start ffmpeg stream-in subprocess
+        stream_in = remote_modules[1].recieve_streams(cap)
+        remote_process = True
+    try:
        while True:
            ret, frame = cap.read()
            if not ret:
@@ -283,8 +346,27 @@ def webcam_preview():
            temp_frame = frame.copy()  # Create a copy of the frame

            for frame_processor in frame_processors:
-                temp_frame = frame_processor.process_frame(source_image, temp_frame)
+                if remote_process:
+                    if frame_processor.__name__ == "modules.processors.frame.remote_processor":
+                        # print('------- Remote Process ----------')
+                        if not frame_processor.pre_check():
+                            print("No Input and Output Address")
+                            sys.exit()
+                        _frame = frame_processor.stream_frame(temp_frame, stream_out, stream_in)
+                        if _frame is not None:
+                            temp_frame = _frame
+                if not remote_process:
+                    temp_frame = frame_processor.process_frame(source_image, temp_frame)

+            if not remote_process:
+                image = cv2.cvtColor(temp_frame, cv2.COLOR_BGR2RGB)  # Convert the image to RGB format to display it with Tkinter
+                image = Image.fromarray(image)
+                image = ImageOps.contain(image, (PREVIEW_MAX_WIDTH, PREVIEW_MAX_HEIGHT), Image.LANCZOS)
+                image = ctk.CTkImage(image, size=image.size)
+                preview_label.configure(image=image)
+                ROOT.update()
+            elif _frame is not None:
                image = cv2.cvtColor(temp_frame, cv2.COLOR_BGR2RGB)  # Convert the image to RGB format to display it with Tkinter
                image = Image.fromarray(image)
                image = ImageOps.contain(image, (PREVIEW_MAX_WIDTH, PREVIEW_MAX_HEIGHT), Image.LANCZOS)
@@ -294,6 +376,6 @@ def webcam_preview():
            if PREVIEW.state() == 'withdrawn':
                break
+    finally:
        cap.release()
        PREVIEW.withdraw()  # Close preview window when loop is finished

requirements.txt

@@ -21,3 +21,4 @@ opennsfw2==0.10.2
protobuf==4.23.2
tqdm==4.66.4
gfpgan==1.3.8
+zmq==26.1.0
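The `zmq` name on PyPI is a thin wrapper around the pyzmq bindings, so `pip install pyzmq==26.1.0` should target the same version directly if the pin gives trouble. A quick post-install check (a sketch):

```python
# Verify the ZeroMQ bindings are importable after installation.
import zmq
print("libzmq:", zmq.zmq_version(), "pyzmq:", zmq.pyzmq_version())
```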