Working on Telescope

A short walk-through of what my eyes saw


Hey everyone! For this week’s assignment I’ve been tasked with working on an open source project called Telescope. As per its description, it’s “A tool for tracking blogs in orbit around Seneca’s open source involvement”, i.e. it is a central website for all of Seneca’s open-source-related blogs. My tasks were the following: set up Telescope on my local machine (it uses various technologies which are a prerequisite), understand and work with the Telescope REST API, and use my Haystack Link Checker to check the links being fed from the API.

To begin, I had to get my local environment ready to work on Telescope. I started by reading their contributing page, which pointed me to another page that explained the setup process. This is where things went a bit south. I found the steps for the Windows setup very long, and setting up Redis was a nightmare for me, as some of the steps were confusing and returned errors. This led me to talk with others on Slack, who helped me immensely by giving me the gist of how to get it working. I simply installed Docker Desktop, as per their instructions, and to my surprise I got everything working right away.

Now to start working with the Telescope REST API. With the environment set up, I ran the Telescope website on my local machine and was able to query my local Telescope’s REST API. However, Haystack was initially only able to handle plain text and HTML files, which meant it needed an upgrade.

I had two choices: add the ability to enter a URL on the command line and pull my links from there, or download the data from the API separately and then feed it into my link checker. I opted for the former and added an argument to parse the data being fed from the API. My plan was to add an additional argument to Haystack telling it to send a GET request to the URL on the command line (the URL in this case being the API) and obtain the data. I would then write a method to parse the data and return an array of URLs. That way I wouldn’t have to change anything else in my program, since it was originally built to check an array of URLs (except this time, instead of an array of URLs parsed from a file, it would be an array of URLs parsed from a JSON array). This took a bit of trial and error, but Python makes it easy to work with JSON objects, since its dictionary data structure is very similar to a JSON object.
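To make that plan concrete, here is a minimal sketch of what the new argument and GET request could look like. The flag name `--api` and the helper names are my own placeholders for illustration, not necessarily what ended up in Haystack:

```python
import argparse
import json
import urllib.request


def fetch_json(api_url):
    """Send a GET request to the API and decode the JSON response body."""
    with urllib.request.urlopen(api_url) as response:
        return json.loads(response.read().decode("utf-8"))


def build_parser():
    """Build the command-line parser: the original positional file argument,
    plus a new (hypothetical) --api flag for pulling links from a REST API."""
    parser = argparse.ArgumentParser(description="Haystack link checker")
    parser.add_argument("file", nargs="?", help="file to scan for links")
    parser.add_argument("--api", help="URL of a REST API returning a JSON array of posts")
    return parser
```

With this shape, the main method only needs one branch: if `args.api` is set, get the URL array from the API; otherwise parse the file as before.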

To summarize what I did: I sent a GET request to the API, which returned an array of JSON objects. These objects were then passed into a method called parse_json along with the original URL for the API. For each object, I appended its ID to the end of the API URL to get the whole link, added that link to an array, and returned the array to the main method. From there, the main method works as it originally did and validates every link in the array.
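As a rough illustration of that parse step — the `"id"` field name here is my assumption about the shape of the API’s response, so check the gist for the real code:

```python
def parse_json(data, api_url):
    """Turn the API's JSON array into full post URLs by appending each
    object's id to the base API URL."""
    base = api_url.rstrip("/")  # avoid a double slash when joining
    return [f"{base}/{post['id']}" for post in data]
```

For example, `parse_json([{"id": "abc123"}, {"id": "def456"}], "http://localhost:3000/posts")` would return `["http://localhost:3000/posts/abc123", "http://localhost:3000/posts/def456"]`, which the existing validation loop can consume unchanged.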

Overall, because I went about it this way, I didn’t have to change the main link-validation method at all; I just had to find a way to get the array of links from a source other than a file (in this case, from an API). In the end, I’m pretty proud that I didn’t have to modify too much to get this working. If you want to see the exact changes, you can check out this gist: https://gist.github.com/rjayroso/07e9e504d8737afac69212067df83e93