'use strict';(function(){const t={cache:!0};t.doc={id:"id",field:["title","content"],store:["title","href","section"]};const e=FlexSearch.create("balance",t);window.bookSearchIndex=e,e.add({id:0,href:"/docs/developer/",title:"Developer Documentation",section:"Docs",content:"This is the developer documentation. (Work-in-progress)\n"}),e.add({id:1,href:"/docs/user/",title:"User Documentation",section:"Docs",content:"This is the user documentation. (Work-in-progress)\n"}),e.add({id:2,href:"/posts/week-9/",title:"Week 9",section:"Blog",content:"This week I worked on getting focus detection working. I implemented basic Laplacian blur detection1 and fast Fourier blur detection2.\nFinding the threshold for both can be a challenge.\nGeneral pipeline # I continued to develop a general pipeline to fit all the filters in.\n https://pyimagesearch.com/2015/09/07/blur-detection-with-opencv/\u0026#160;\u0026#x21a9;\u0026#xfe0e;\n https://pyimagesearch.com/2020/06/15/opencv-fast-fourier-transform-fft-for-blur-detection-in-images-and-video-streams/\u0026#160;\u0026#x21a9;\u0026#xfe0e;\n "}),e.add({id:3,href:"/posts/week-8/",title:"Week 8",section:"Blog",content:"Monday # Towards the end of last week ( Week 7), I managed to refactor my code in order to make it more portable. This allowed me to train my model on different machines. I ran my training script on the uni\u0026rsquo;s GPU compute successfully for 20 epochs. The next stage was to train it for longer and analyse the results. On the Monday morning I adjusted the parameters of my training script to train for 2000 epochs instead.\nTuesday # Tuesday afternoon the training had finished and I had a model that was trained on 2000 epochs. This gave me a day to analyse the results and do some rough predictions before my mid-project demo on the Wednesday.\nTraining and validation loss graphs # As we can see from the 2000 Epochs graph, the loss seems to plateau at around 60 epochs. The training loss seems to even out more consistently than the validation loss. This means that the model isn\u0026rsquo;t fully learning what I want it to. It\u0026rsquo;s also overfitting slightly, as it\u0026rsquo;s better at predicting the training set than the validation set. The variance in the validation set shows that the features it\u0026rsquo;s decided to learn aren\u0026rsquo;t the right features to confidently predict aesthetics in this dataset.\nFor the rest of the day I worked on my prediction script so I could use the model to predict new pictures. I also worked on my architecture diagrams and slides for the mid-project demo.\n Due to the nature of how I processed my images (resizing them to 32x32, saving them to a tensor and then saving that tensor to disk), my prediction script also displayed those down-sized images. This may also have affected the performance of the model.\nWednesday # I spent most of Wednesday morning finishing my slides and diagrams and making example predictions using the prediction script.\n Rest of week # I spent the rest of the week looking at the project\u0026rsquo;s overall pipeline, including the non-machine learning filtering. I also started to implement basic focus detection by looking at blur detection using the Laplacian operator1.\n https://pyimagesearch.com/2015/09/07/blur-detection-with-opencv/\u0026#160;\u0026#x21a9;\u0026#xfe0e;\n "}),e.add({id:4,href:"/posts/week-7/",title:"Week 7",section:"Blog",content:"Now that I had successfully run my model without any runtime errors, the next step this week was finding some GPU compute so I could train my model on much more powerful hardware to accelerate the training.\nMy first idea was to use cloud computing. There are machine-learning-specific cloud technologies, but I didn\u0026rsquo;t want to use these, as I didn\u0026rsquo;t want my code to depend on the specific way each cloud platform expects the code to be structured. Instead, I wanted to get a general VM with an attached GPU where I could run my workloads manually. I had already written Docker images that contained all the dependencies of my code that I could deploy to thes
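// Usage sketch (not part of the generated index): assuming the FlexSearch 0.6.x
// document-index API, searching an index configured with the `store` list above
// returns the stored fields ({title, href, section}) for each hit. The query string
// and the limit of 10 below are illustrative values, not taken from the site's code.
var hits = window.bookSearchIndex.search("blur detection", 10); // synchronous search, at most 10 hits
hits.forEach(function (page) {
  // Each hit exposes the fields declared in t.doc.store.
  console.log(page.section + " / " + page.title + " -> " + page.href);
});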