For as long as I can remember, I've been obsessed with doing things from scratch. It's a source of satisfaction and pride to succeed in building something completely from the ground up, and with it comes a great deal of in-depth knowledge and understanding about what it is you're actually building. I've been accused multiple times of "reinventing the wheel," and while I strongly dislike both the phrase and the sentiment behind it, there's still a valid point somewhere in there...
In the majority of cases up to this point, I'm glad that I've taken the implement-it-myself approach - I've learned a GREAT deal from past projects (both successes and failures) due to my obstinate refusal to use premade libraries or preexisting solutions. Sadly, however, it's approaching the point where this is more of a hindrance than an advantage. The field of machine learning is quickly expanding, and it was already pretty big! There are dozens of interesting projects in the field that I'd love to tackle, but I'm held back from actually doing any of them because, in my desire to understand machine learning as deeply as I can, I've wanted to implement all of the common ML algorithms and techniques from scratch before using them in my projects.
There's a problem with this goal: machine learning stuff is HARD! Most of the concepts are extremely math-heavy, and I'm held back in their implementation not necessarily because the implementation itself is difficult, but because I have difficulty even understanding what's going on and why the concepts work (which is the whole goal behind making them myself!). Thus, implementing something in machine learning from scratch is an incredibly deep, time-consuming, and brain-intensive task, meaning that if I stick with this goal, it will be a very long time before I ever actually get to use my homegrown libraries.
Added to this is the fact that in reality, my cobbled, messy implementations would be horribly inefficient and restricted in comparison to well-known preexisting libraries such as TensorFlow and Theano. I can't even use the "it's more flexible if I make it myself" argument!
All of this said, I've reached a decision. I'm officially throwing off my wheel-reinventing cloak, in favor of actually learning how to use ML libraries and getting to make cool stuff with them!
The important part about this decision is that I'm applying it only so that I'm no longer hindered from using machine learning in my projects, and thus am free to learn how they work from a usability standpoint. I still enjoy building things from scratch, and I firmly stand by the point that making something yourself is one of the best ways of learning it inside and out. I'm therefore instituting a new TYPE of project to throw into my rotation, which I'm calling "academic exercises."
I still 100% want and plan to implement as many machine learning algorithms from scratch as I can, but merely for the purpose of picking them apart and figuring out how they work, rather than utilizing them in projects. An "academic exercise" will consist of researching a concept, along with an attempted implementation and explanation of all its inner workings. Following this, my goal is to write mock "research papers" detailing the heavy math behind what I found in the research, and explaining why it all works. These papers serve both to solidify my own knowledge and to act as a compiled collection of resources and information for understanding the concept. Then, in honor of the Feynman technique, I will attempt to explain the concept in as simple a way as I can, watering down all the math into a summary that fits in a single blog post! The goal here is, again, to further solidify the concept in my head, as well as to make me take a step back from the math and look at it from a different perspective. I hope to be able to explain it so that even someone without a strong math background would understand what's going on, and be able to marvel at the elegance and beauty that is machine learning!
When I begin attempting these academic exercises, I will place all papers and work that I do into a new "research" section on this website, and the blog posts that summarize their contents will be posted into a "research" blog stream.
Written 4:41 PM December 10, 2016 by Nathan Martindale (WildfireXIII)