<!DOCTYPE html>
<html xmlns="http://www.w3.org/1999/xhtml" lang xml:lang>
<head>
<meta charset="utf-8" />
<meta name="generator" content="pandoc" />
<meta name="viewport" content="width=device-width, initial-scale=1.0, user-scalable=yes" />
<title>WGU Capstone User Guide</title>
<style>
code{white-space: pre-wrap;}
span.smallcaps{font-variant: small-caps;}
div.columns{display: flex; gap: min(4vw, 1.5em);}
div.column{flex: auto; overflow-x: auto;}
div.hanging-indent{margin-left: 1.5em; text-indent: -1.5em;}

ul.task-list[class]{list-style: none;}
ul.task-list li input[type="checkbox"] {
font-size: inherit;
width: 0.8em;
margin: 0 0.8em 0.2em -1.6em;
vertical-align: middle;
}
.display.math{display: block; text-align: center; margin: 0.5rem auto;}

pre > code.sourceCode { white-space: pre; position: relative; }
pre > code.sourceCode > span { line-height: 1.25; }
pre > code.sourceCode > span:empty { height: 1.2em; }
.sourceCode { overflow: visible; }
code.sourceCode > span { color: inherit; text-decoration: inherit; }
div.sourceCode { margin: 1em 0; }
pre.sourceCode { margin: 0; }
@media screen {
div.sourceCode { overflow: auto; }
}
@media print {
pre > code.sourceCode { white-space: pre-wrap; }
pre > code.sourceCode > span { text-indent: -5em; padding-left: 5em; }
}
pre.numberSource code
{ counter-reset: source-line 0; }
pre.numberSource code > span
{ position: relative; left: -4em; counter-increment: source-line; }
pre.numberSource code > span > a:first-child::before
{ content: counter(source-line);
position: relative; left: -1em; text-align: right; vertical-align: baseline;
border: none; display: inline-block;
-webkit-touch-callout: none; -webkit-user-select: none;
-khtml-user-select: none; -moz-user-select: none;
-ms-user-select: none; user-select: none;
padding: 0 4px; width: 4em;
color: #aaaaaa;
}
pre.numberSource { margin-left: 3em; border-left: 1px solid #aaaaaa; padding-left: 4px; }
div.sourceCode
{ }
@media screen {
pre > code.sourceCode > span > a:first-child::before { text-decoration: underline; }
}
code span.al { color: #ff0000; font-weight: bold; }
code span.an { color: #60a0b0; font-weight: bold; font-style: italic; }
code span.at { color: #7d9029; }
code span.bn { color: #40a070; }
code span.bu { color: #008000; }
code span.cf { color: #007020; font-weight: bold; }
code span.ch { color: #4070a0; }
code span.cn { color: #880000; }
code span.co { color: #60a0b0; font-style: italic; }
code span.cv { color: #60a0b0; font-weight: bold; font-style: italic; }
code span.do { color: #ba2121; font-style: italic; }
code span.dt { color: #902000; }
code span.dv { color: #40a070; }
code span.er { color: #ff0000; font-weight: bold; }
code span.ex { }
code span.fl { color: #40a070; }
code span.fu { color: #06287e; }
code span.im { color: #008000; font-weight: bold; }
code span.in { color: #60a0b0; font-weight: bold; font-style: italic; }
code span.kw { color: #007020; font-weight: bold; }
code span.op { color: #666666; }
code span.ot { color: #007020; }
code span.pp { color: #bc7a00; }
code span.sc { color: #4070a0; }
code span.ss { color: #bb6688; }
code span.st { color: #4070a0; }
code span.va { color: #19177c; }
code span.vs { color: #4070a0; }
code span.wa { color: #60a0b0; font-weight: bold; font-style: italic; }
</style>
<style type="text/css">html{
font-family:sans-serif;
-ms-text-size-adjust:100%;
-webkit-text-size-adjust:100%
}
body{
background-color: #11111b;
color: #cdd6f4;
margin:0;
}
a{
color: #a6d189;
background:transparent
}
a:active,a:hover{
outline:0
}
b,strong{
font-weight:bold
}
h1{
color: #eba0ac;
font-size:2em;
margin:0.67em 0
}
small{
font-size:80%
}
sub,sup{
color: #b5bfe2;
font-size:75%;
line-height:0;
position:relative;
vertical-align:baseline
}
sup{
top:-0.5em
}
sub{
bottom:-0.25em
}
img{
border:0
}
hr{
-moz-box-sizing:content-box;
box-sizing:content-box;
height:0;
}
pre{
overflow:auto
}
code,kbd,pre,samp{
font-family:monospace, monospace;
font-size:1em;
color: #414559;
}
textarea{
overflow:auto
}
table{
border-collapse:collapse;
border-spacing:0
}
td,th{
padding:0
}
body,code,tr.odd,tr.even{
line-height:1.3;
text-align:justify;
-moz-hyphens:auto;
-ms-hyphens:auto;
-webkit-hyphens:auto;
hyphens:auto
}
@media (max-width: 400px){
body{
font-size:12px;
margin-left:10px;
margin-right:10px;
margin-top:10px;
margin-bottom:15px
}
}
@media (min-width: 401px) and (max-width: 600px){
body{
font-size:14px;
margin-left:10px;
margin-right:10px;
margin-top:10px;
margin-bottom:15px
}
}
@media (min-width: 601px) and (max-width: 900px){
body{
font-size:15px;
margin-left:100px;
margin-right:100px;
margin-top:20px;
margin-bottom:25px
}
}
@media (min-width: 901px) and (max-width: 1800px){
body{
font-size:17px;
margin-left:200px;
margin-right:200px;
margin-top:30px;
margin-bottom:25px;
max-width:800px
}
}
@media (min-width: 1801px){
body{
font-size:18px;
margin-left:20%;
margin-right:20%;
margin-top:30px;
margin-bottom:25px;
max-width:1000px
}
}
p{
margin-top:10px;
margin-bottom:18px
}
em{
font-style:italic
}
strong{
font-weight:bold
}
h1,h2,h3,h4,h5,h6{
font-weight:bold;
padding-top:0.25em;
margin-bottom:0.15em
}
header{
line-height:2.475em;
padding-bottom:0.7em;
border-bottom:1px solid #bbb;
margin-bottom:1.2em
}
header>h1{
border:none;
padding:0;
margin:0;
font-size:225%
}
header>h2{
border:none;
padding:0;
margin:0;
font-style:normal;
font-size:175%
}
header>h3{
padding:0;
margin:0;
font-size:125%;
font-style:italic
}
header+h1{
border-top:none;
padding-top:0px
}
h1{
border-top:1px solid #bbb;
padding-top:15px;
font-size:150%;
margin-bottom:10px
}
h1:first-of-type{
border:none
}
h2{
font-size:125%;
font-style:italic
}
h3{
font-size:105%;
font-style:italic
}
hr{
border:0px;
border-top:1px solid #bbb;
width:100%;
height:0px
}
hr+h1{
border-top:none;
padding-top:0px
}
ul,ol{
font-size:90%;
margin-top:10px;
margin-bottom:15px;
padding-left:30px
}
ul{
list-style:circle
}
ol{
list-style:decimal
}
ul ul,ol ol,ul ol,ol ul{
font-size:inherit
}
li{
margin-top:5px;
margin-bottom:7px
}
q,blockquote,dd{
font-style:italic;
font-size:90%
}
blockquote,dd{
quotes:none;
border-left:0.35em #bbb solid;
padding-left:1.15em;
margin:0 1.5em 0 0
}
blockquote blockquote,dd blockquote,blockquote dd,dd dd,ol blockquote,ol dd,ul blockquote,ul dd,blockquote ol,dd ol,blockquote ul,dd ul{
font-size:inherit
}
a,a:link,a:visited,a:hover{
text-decoration:none;
border-bottom:1px dashed #111
}
a:hover,a:link:hover,a:visited:hover,a:hover:hover{
border-bottom-style:solid
}
a.footnoteRef,a:link.footnoteRef,a:visited.footnoteRef,a:hover.footnoteRef{
border-bottom:none;
color:#666
}
code{
font-family:"Source Code Pro","Consolas","Monaco",monospace;
font-size:85%;
background-color:#ddd;
border:1px solid #bbb;
padding:0px 0.15em 0px 0.15em;
-webkit-border-radius:3px;
-moz-border-radius:3px;
border-radius:3px
}
pre{
margin-right:1.5em;
display:block
}
pre>code{
display:block;
font-size:70%;
padding:10px;
-webkit-border-radius:5px;
-moz-border-radius:5px;
border-radius:5px;
overflow-x:auto
}
blockquote pre,dd pre,ul pre,ol pre{
margin-left:0;
margin-right:0
}
blockquote pre>code,dd pre>code,ul pre>code,ol pre>code{
font-size:77.77778%
}
caption,figcaption{
font-size:80%;
font-style:italic;
text-align:right;
margin-bottom:5px
}
caption:empty,figcaption:empty{
display:none
}
table{
width:100%;
margin-top:1em;
margin-bottom:1em
}
table+h1{
border-top:none
}
tr td,tr th{
padding:0.2em 0.7em
}
tr.header{
border-top:1px solid #222;
border-bottom:1px solid #222;
font-weight:700
}
tr.odd{
background-color:#eee
}
tr.even{
background-color:#ccc
}
tbody:last-child{
border-bottom:1px solid #222
}
dt{
font-weight:700
}
dt:after{
font-weight:normal;
content:":"
}
dd{
margin-bottom:10px
}
img{
display:block;
margin:0px auto;
padding:0px;
max-width:100%
}
figcaption{
margin:5px 10px 5px 30px
}
.footnotes{
color:#666;
font-size:70%;
font-style:italic
}
.footnotes li p:last-child a:last-child{
border-bottom:none
}
</style>
<!--[if lt IE 9]>
<script src="//cdnjs.cloudflare.com/ajax/libs/html5shiv/3.7.3/html5shiv-printshiv.min.js"></script>
<![endif]-->
</head>
<body>
<nav id="TOC" role="doc-toc">
<ul>
<li><a href="#step-0-clone-the-repository" id="toc-step-0-clone-the-repository">Step 0: Clone the
repository</a></li>
<li><a href="#project-structure" id="toc-project-structure">Project
Structure</a>
<ul>
<li><a href="#top-level" id="toc-top-level">Top Level</a></li>
<li><a href="#cascades" id="toc-cascades">./Cascades</a></li>
<li><a href="#training_data" id="toc-training_data">./Training_data</a></li>
<li><a href="#validation" id="toc-validation">./Validation</a></li>
</ul></li>
<li><a href="#step-1---prerequisites" id="toc-step-1---prerequisites">Step 1 - Prerequisites</a>
<ul>
<li><a href="#set-up-virtual-environment" id="toc-set-up-virtual-environment">Set up virtual environment</a></li>
<li><a href="#install-requirements" id="toc-install-requirements">Install requirements</a></li>
</ul></li>
<li><a href="#step-2---running-the-project" id="toc-step-2---running-the-project">Step 2 - Running the
project</a></li>
<li><a href="#additional-flags" id="toc-additional-flags">Additional
flags</a>
<ul>
<li><a href="#help" id="toc-help">Help</a></li>
<li><a href="#version" id="toc-version">Version</a></li>
<li><a href="#show-dashboard" id="toc-show-dashboard">Show
Dashboard</a></li>
<li><a href="#output-adjustment-instructions" id="toc-output-adjustment-instructions">Output Adjustment
Instructions</a></li>
<li><a href="#use-video-file" id="toc-use-video-file">Use Video
File</a></li>
<li><a href="#headless-mode" id="toc-headless-mode">Headless
Mode</a></li>
<li><a href="#save-frames-for-training-data" id="toc-save-frames-for-training-data">Save Frames for Training
Data</a></li>
<li><a href="#generate-validation-file" id="toc-generate-validation-file">Generate Validation File</a></li>
</ul></li>
<li><a href="#training-your-own-haar-file" id="toc-training-your-own-haar-file">Training Your Own Haar File</a>
<ul>
<li><a href="#prerequisites" id="toc-prerequisites">Prerequisites</a></li>
<li><a href="#generating-positive-images" id="toc-generating-positive-images">Generating positive images</a></li>
</ul></li>
<li><a href="#validation-and-testing" id="toc-validation-and-testing">Validation and Testing</a>
<ul>
<li><a href="#generate-the-ground-truth-file" id="toc-generate-the-ground-truth-file">Generate the Ground Truth
file</a></li>
<li><a href="#getting-the-model-validation-file" id="toc-getting-the-model-validation-file">Getting the model validation
file</a></li>
<li><a href="#comparing-it-to-the-ground-truth" id="toc-comparing-it-to-the-ground-truth">Comparing it to the ground
truth</a></li>
</ul></li>
</ul>
</nav>
<h1 id="step-0-clone-the-repository">Step 0: Clone the repository</h1>
<p>Before you can run this project, you will need to clone the git
repository with the following command:</p>
<div class="sourceCode" id="cb1"><pre class="sourceCode sh"><code class="sourceCode bash"><span id="cb1-1"><a href="#cb1-1" aria-hidden="true" tabindex="-1"></a><span class="fu">git</span> clone https://git.nickiel.net/Nickiel/WGU-Capstone</span></code></pre></div>
<p>See <a href="#project-structure">Project Structure</a> for more
information on the repository you just cloned.</p>
<p>See <a href="#step-1---prerequisites">Step 1 - Prerequisites</a> for
what is required before you can run this project.</p>
<h1 id="project-structure">Project Structure</h1>
<p>Below you can find the default project folder structure after cloning
it:</p>
<pre><code>WGU-Capstone
├.gitignore
├Main.py
├README.md
├WGU-Capstone-User-Guide.html
├requirements.txt
├shell.nix
├cascades
│ ├ cascade_1.xml
│ ├ cascade_2.xml
│ ├ cascade_5.xml
│ └ cascade_10.xml
├training_data
│ ├ positives
│ └ training_data_setup.py
└validation
  ├ TestVideo.mp4
  ├ compare_to_gt.py
  ├ create_ground_truth.py
  └ ground_truth.txt</code></pre>
<p><a href="#step-1---prerequisites">Click here to skip the detailed
file structure explanation</a></p>
<h2 id="top-level">Top Level</h2>
<p>In the top level of the cloned repository, you will find most of the
files required for the core functionality.</p>
<h4 id="gitignore">.gitignore</h4>
<p>This file excludes files we don’t want to check into git - such as
the training data. These files continue to exist on your machine, but
they are not uploaded to the remote git repository. This helps keep
clone sizes down, and upload/download speeds up.</p>
<h4 id="main.py">Main.py</h4>
<p>The main file of interest for this project. This file contains all of
the code for the finished product. As long as this file is in the same
folder as the <code>./cascades</code> folder, it can be copied and run
anywhere with the prerequisites installed.</p>
<h4 id="readme.md">README.md</h4>
<p>The file you are reading, in a format that most git hosting servers
automatically render as the repository’s home page.</p>
<h4 id="wgu-capstone-user-guide.html">WGU-Capstone-User-Guide.html</h4>
<p>The HTML version of the README.md file, bundled with CSS and
hyperlinks.</p>
<h4 id="requirements.txt">requirements.txt</h4>
<p>The file that contains all of the python pip requirements to run. The
packages in this file can either be installed by hand
(e.g. <code>pip install opencv-python</code>), or can all be installed
at once with <code>pip install -r requirements.txt</code>, which will
install all of the modules needed to run this project that are not
included in the standard library.</p>
<h4 id="shell.nix">shell.nix</h4>
<p>A file that can be used on <a href="https://nixos.org/">Nix and
NixOS</a> systems to create a reproducible environment with all of the
requirements to run the <code>Main.py</code> file.</p>
<h2 id="cascades">./Cascades</h2>
<p>This folder contains the final trained models created by this project
in the model training step. For more information on how they were
created, see <a href="#training-your-own-haar-file">Training your own
Haar file</a> below.</p>
<p>This folder needs to be in the same directory as the
<code>Main.py</code> file for <code>Main.py</code> to be able to
run.</p>
<h2 id="traning_data">./Traning_data</h2>
|
|||
|
<p>This folder contains all of the requirements for creating a new model
|
|||
|
from a few un-catagorized positive images, and a large dataset of
|
|||
|
negatives.</p>
|
|||
|
<p>NOTE: Before anything in this folder can be run, please see <a href="#training-your-own-haar-file">the section on training the haar
|
|||
|
files</a> for several prerequisites.</p>
|
|||
|
<h4 id="training_data_setup.py">./Training_data_setup.py</h4>
|
|||
|
<p>This python file takes a large data-set of negative images from the
|
|||
|
<code>./training_data/negatives</code> folder and creates .vec files
|
|||
|
that can be passed as an arguement to the utility that trains the final
|
|||
|
Haar file.</p>
|
|||
|
<h4 id="positives">./Positives</h4>
<p>This folder contains the 10 images that were used to create the
cascade files included in this project. These files were included
because the 10 images are a very small dataset in comparison to the
required negatives.</p>
<h2 id="validation">./Validation</h2>
<p>This folder contains all of the scripts and files used to measure the
performance and accuracy of the generated models.</p>
<h4 id="testvideo.mp4">TestVideo.mp4</h4>
<p>This minute-long video was used to test the trained models.</p>
<h4 id="compare_to_gt.py">Compare_to_gt.py</h4>
<p>This file compares a validation file generated by a
<code>Main.py --validate</code> run with the provided
<code>ground_truth.txt</code> file. The output of this file is a .csv
file that describes the average deviation from the boxes described by
the <code>ground_truth.txt</code> file. See <a href="#validation-and-testing">Validation and Testing</a> for more
information on this process.</p>
<h4 id="create_ground_truth.py">Create_ground_truth.py</h4>
<p>This is the file used to create the <code>ground_truth.txt</code>
file from the provided <code>TestVideo.mp4</code>.</p>
<h1 id="step-1---prerequisites">Step 1 - Prerequisites</h1>
<p>Before you can run this project, you need a python environment with
the required packages installed.</p>
<p>If you are using Nix or NixOS, simply run <code>nix-shell</code> in
the <code>WGU-Capstone</code> folder, and all of the packages required
to run <code>Main.py</code> will be installed for that shell
session.</p>
<p>However, if you are not on a Nix system, continue reading.</p>
<p>The steps below detail how to set up a virtual environment that can
be used to run this project, but a system-wide install of python with
the packages detailed in <code>requirements.txt</code> installed will
also suffice.</p>
<h3 id="set-up-virtual-environment">Set up virtual environment</h3>
<p>This project was created with python 3.11, and other versions are not
guaranteed to work. To ensure the project works as designed, install
python 3.11 from the official python download page.</p>
<p>Once you have python 3.11 installed on your system, navigate to the
cloned repository’s root directory, and run the following command to
create a new virtual environment:</p>
<pre class="shell"><code>python -m venv ./venv</code></pre>
<p>You can now run the following commands to enter the virtual
environment, and any python commands will be run inside the virtual
environment instead of your system-wide installation.</p>
<p>On Windows, run the following if you are using a cmd prompt:</p>
<pre class="shell"><code>.\venv\Scripts\activate.bat</code></pre>
<p>On Windows in PowerShell:</p>
<pre class="shell"><code>.\venv\Scripts\Activate.ps1</code></pre>
<p>If you are on a linux based operating system, enter the virtual
environment with:</p>
<pre class="shell"><code>source ./venv/bin/activate</code></pre>
<h3 id="install-requirements">Install requirements</h3>
<p>Now that you have activated the virtual environment, install the
non-standard library requirements with the below command:</p>
<pre class="shell"><code>pip install -r ./requirements.txt</code></pre>
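<p>To confirm the install worked, you can quickly check that OpenCV
imports cleanly (a sanity check, not a required step):</p>
<pre class="shell"><code>python -c "import cv2; print(cv2.__version__)"</code></pre>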
<h1 id="step-2---running-the-project">Step 2 - Running the project</h1>
<p>Now that the prerequisites have been installed, you can run the
project. For a full list of command-line arguments, run
<code>python Main.py --help</code>.</p>
<p>Run the project with the dashboard enabled with the following command
from the root of the project directory:</p>
<pre class="shell"><code>python Main.py -d</code></pre>
<p>You should see the web-cam of your computer turn on, and a window
appear showing the view of the webcam, with boxes around any detected
faces.</p>
<p>To display the calculated adjustment amounts generated by this
project, enable the print-to-stdout feature with the <code>-o</code>
flag:</p>
<pre class="shell"><code>python Main.py -d -o</code></pre>
<p>This command will output the calculated output commands for every
detected face, and also show the summary statistics.</p>
<h1 id="additional-flags">Additional flags</h1>
<p>This section describes, in greater depth, the available feature
flags shown by the <code>--help</code> screen.</p>
<h2 id="help">Help</h2>
<p><code>-h</code> or <code>--help</code></p>
<p>Displays all of the available parameters with a quick description of
each.</p>
<h2 id="version">Version</h2>
<p><code>-v</code> or <code>--version</code></p>
<p>Prints the version of the program and exits.</p>
<h2 id="show-dashboard">Show Dashboard</h2>
<p><code>-d</code> or <code>--dashboard</code></p>
<p>Display the run-summary statistics; these are off by default.</p>
<h2 id="output-adjustment-instructions">Output Adjustment
Instructions</h2>
<p><code>-o</code> or <code>--output</code></p>
<p>Print the calculated adjustment instructions generated by the
program. This output demonstrates the generated values that will be sent
to the motorized camera platform.</p>
<h2 id="use-video-file">Use Video File</h2>
<p><code>-f &lt;file_path&gt;</code> or
<code>--file &lt;file_path&gt;</code></p>
<p>Use a video file (such as ./validation/TestVideo.mp4) instead of the
computer’s webcam. Useful for generating validation files and on
machines without a working webcam.</p>
<h2 id="headless-mode">Headless Mode</h2>
<p><code>-s</code> or <code>--no-screen</code></p>
<p>Run the program without the window displaying processed video
frames.</p>
<h2 id="save-frames-for-training-data">Save Frames for Training
Data</h2>
<p><code>-t</code> or <code>--training-data</code></p>
<p>Save frames where faces were found to <code>./output</code> as .jpg
files, and save each located face’s location to a .csv file. This
feature will be used to generate positive images automatically for
training future models.</p>
<h2 id="generate-validation-file">Generate Validation File</h2>
<p><code>--validate</code></p>
<p>Outputs all discovered boxes, the frame they were found on, and the
box coordinates so the model can be validated against the ground truth.
See <a href="#validation-and-testing">validation and testing</a> for
more information on this process.</p>
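<p>These flags can be combined. For example, to process the bundled test
video headless while saving training frames, you could combine the flags
described above like so (an illustrative combination, not a command from
the project itself):</p>
<pre class="shell"><code>python Main.py -s -t -f ./validation/TestVideo.mp4</code></pre>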
<h1 id="training-your-own-haar-file">Training Your Own Haar File</h1>
<p>This project contains the scripts required to train your own Haar
cascade files, but it does not contain several of the dependencies.</p>
<p>NOTE: These steps only apply to Windows devices.</p>
<h2 id="prerequisites">Prerequisites</h2>
<p>The first requirement needed before you can train your own Haar file
is a large number of negative images. For this project, I used <a href="https://www.kaggle.com/datasets/arnaud58/landscape-pictures/">this
Kaggle dataset of landscape images</a> as my negatives datasource. After
downloading this file, unzip it and deposit all of the raw images into
the <code>./training_data/negatives</code> folder - create it if
needed.</p>
<p>Next we need to download the Windows OpenCV binary distributable and
put it in our training_data folder.</p>
<p>You can download the 3.4.15 binary executable <a href="https://sourceforge.net/projects/opencvlibrary/files/3.4.15/opencv-3.4.15-vc14_vc15.exe/download">here</a>.
(You can also go <a href="https://opencv.org/releases/">here</a> and
find the 3.4.15 release and choose “Windows” to get to the same
page).</p>
<p>After the .exe file has downloaded, open it and go through the steps
to unzip it. After it has been unzipped, copy the folder to
<code>./training_data/opencv</code>. You should then be able to run this
from the training_data directory:</p>
<pre class="shell"><code>.\opencv\build\x64\vc15\bin\opencv_createsamples.exe</code></pre>
<p>If you do not get an error running the above command, then it was
installed correctly.</p>
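<p>The OpenCV training tools also expect a plain-text list of the
negative images (commonly called <code>bg.txt</code>). The setup script
in the next section takes care of this kind of bookkeeping, but if you
ever need to rebuild the list by hand, a minimal sketch looks like this
(the paths are assumptions based on the folder layout above):</p>
<pre><code>from pathlib import Path

# Write bg.txt with one negative-image path per line.
negatives = sorted(Path("./negatives").glob("*.jpg"))
with open("bg.txt", "w") as out:
    for image in negatives:
        out.write(image.as_posix() + "\n")</code></pre>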
<h2 id="generating-positive-images">Generating positive images</h2>
<p>Now that we have the create_samples utility provided by OpenCV (they
stopped distributing executables of it after 3.4.15) and the negatives
folder full of negative images, we can use the
<code>training_data_setup.py</code> file to create several different
sized datasets ready for training Haar cascade files on.</p>
<p>The python file will run the create_samples tool for every positive
image in <code>./positives</code>, creating many positive images. The
script will do all of the steps up through creating the .vec files that
the train_cascade executable requires.</p>
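<p>Under the hood, each call the script makes looks roughly like the
following (the file names and sample count here are illustrative, not
the script’s exact values):</p>
<pre class="shell"><code>.\opencv\build\x64\vc15\bin\opencv_createsamples.exe -img positives\face_1.jpg -bg bg.txt -num 200 -vec face_1.vec -w 24 -h 24</code></pre>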
<p>Before exiting, training_data_setup outputs the commands that need to
be run to train the models. Run these commands from the training_data
folder, and after they have finished training, you can use the generated
Haar cascades instead of the ones provided.</p>
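<p>For reference, a typical <code>opencv_traincascade</code> run has
this shape - the setup script prints the exact commands to use, and the
numbers below are only illustrative:</p>
<pre class="shell"><code>.\opencv\build\x64\vc15\bin\opencv_traincascade.exe -data cascades -vec face_1.vec -bg bg.txt -numPos 180 -numNeg 400 -numStages 10 -w 24 -h 24</code></pre>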
<h1 id="validation-and-testing">Validation and Testing</h1>
<p>The following describes the process I used to test the precision and
accuracy of the generated cascade files.</p>
<h2 id="generate-the-ground-truth-file">Generate the Ground Truth
file</h2>
<p>I have included a generated <code>ground_truth.txt</code> file, so
you don’t need to do this step. But if you would like to generate the
ground truth file from the provided test video, navigate to the
project’s <code>./validation</code> folder, and run the create ground
truth script:</p>
<pre class="shell"><code>python create_ground_truth.py</code></pre>
<p>A window will open and display the process as it creates the file.
This script does not use Haar files; instead it uses the MIL tracking
algorithm, which produces much more accurate results at the cost of a
slower processing speed for the video.</p>
<p>All of these settings have been hard-coded so it will always output
the same ground truth file.</p>
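<p>For context, the MIL tracker is part of OpenCV’s tracking API. The
snippet below is not the project’s script - just a minimal sketch of the
API it relies on, with the initial box chosen by hand on the first
frame:</p>
<pre><code>import cv2

video = cv2.VideoCapture("TestVideo.mp4")
ok, frame = video.read()

# Choose the face on the first frame, then let MIL follow it.
box = cv2.selectROI("select face", frame)
tracker = cv2.TrackerMIL_create()
tracker.init(frame, box)

while True:
    ok, frame = video.read()
    if not ok:
        break
    found, box = tracker.update(frame)
    if found:
        print(box)  # (x, y, w, h) for this frame</code></pre>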
<h2 id="getting-the-model-validation-file">Getting the model validation
file</h2>
<p>Now that we have the ground truth for our Test Video, we need to
generate the same file with our trained model.</p>
<p>To do this, edit the <code>Main.py</code> file so that it uses the
new cascade, then run the python file with the <code>--validate</code>
option set, and the test video passed to the <code>-f</code> flag. The
command used to generate the statistics with the test video provided is
this:</p>
<pre class="shell"><code>python ./Main.py -d -f ./validation/TestVideo.mp4 --validate</code></pre>
<p>(Notice that we can still display the dashboard while it outputs
validation info)</p>
<p>This will create a new file in the <code>./validation</code> folder
describing the faces and locations found in each frame.</p>
<h2 id="comparing-it-to-the-ground-truth">Comparing it to the ground
truth</h2>
<p>I have created a script to automatically compare a validation file
with a ground truth file, and output the average absolute deviation in
adjustment instructions. It requires two arguments, and has one
optional output. You can see the options with the <code>--help</code>
flag, but I will demonstrate all of the options below.</p>
<p>You can use <code>./validation/compare_to_gt.py</code> like this:</p>
<pre class="shell"><code>cd ./validation
python compare_to_gt.py ./ground_truth.txt ./20231012-081235.txt ./output.csv --faces_count_file ./faces_count_output.csv</code></pre>
<p>This script takes the generated test validation file, computes what
the generated adjustment output would be, and gets the absolute
difference between it and the ground truth; it then adds together all
results for each frame - this last part penalizes false positives. We
can then take the generated output file, open it in Excel, and take its
average to see what the average deviation from the ground truth would
be. The generated faces_count_output file contains the number of faces
found in each frame, and can be used to measure the number of false
positives.</p>
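<p>If you would rather not use Excel, the same average can be computed
with a few lines of python (this assumes the deviation is the last
column of the .csv; adjust the index to the file’s actual layout):</p>
<pre><code>import csv

# Average the absolute deviations written by compare_to_gt.py.
with open("output.csv", newline="") as f:
    deviations = [float(row[-1]) for row in csv.reader(f) if row]

print("average absolute deviation:", sum(deviations) / len(deviations))</code></pre>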
</body>
</html>