GraphQL Key Concepts

What is GraphQL?
GraphQL is a query language for APIs – not databases. In that sense it’s database agnostic and can effectively be used in any context where an API is used: the client sends queries to a GraphQL server, and the server fetches the requested data from wherever it is stored, typically a database.

Why is GraphQL considered better than REST?
– REST APIs are often too inflexible to keep up with the rapidly changing requirements of the clients that access them
– more flexibility and efficiency are needed
– With a REST API, you would typically gather the data by accessing multiple endpoints
– With GraphQL, on the other hand, you’d simply send a single query to the GraphQL server that includes the concrete data requirements (see the sketch below).
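To make that concrete, here’s a minimal sketch of a client sending one GraphQL query over HTTP. The endpoint URL and the field names (user, posts, followers) are illustrative, not any particular schema – but they mirror the kind of query the tutorial shows instead of calling /users/:id, /users/:id/posts and /users/:id/followers separately.

    // One request describing exactly the data the client needs,
    // instead of stitching together several REST endpoints.
    const query = `
      {
        user(id: "1") {
          name
          posts { title }
          followers(last: 3) { name }
        }
      }
    `;

    fetch('https://api.example.com/graphql', {   // hypothetical endpoint
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ query }),
    })
      .then(res => res.json())
      .then(result => console.log(result.data));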


GraphQL + React, Apollo, GraphCool

I completed the How to GraphQL tutorial – it’s a great tutorial for understanding GraphQL. You get to build a Hackernews clone with React, Apollo (a caching GraphQL client), GraphQL and GraphCool (a GraphQL backend). Choose a different stack if you’d like; there are other options in the tutorial.

This is the Hackernews app I built using the tutorial. I’m so glad that GraphQL exists – you don’t need ten different HTTP endpoints for every data retrieval or manipulation; a single one does the job. I can’t believe I haven’t been using this; I’m so happy with it.
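For a rough idea of what the client code looks like – this is a hedged sketch, not the exact code from my app – Apollo lets you attach a GraphQL query to a React component with a higher-order component, and the result shows up as a prop:

    import React from 'react';
    import { graphql } from 'react-apollo';
    import gql from 'graphql-tag';

    // Illustrative query for the Hackernews clone: fetch all submitted links.
    const ALL_LINKS_QUERY = gql`
      query AllLinksQuery {
        allLinks {
          id
          url
          description
        }
      }
    `;

    const LinkList = ({ allLinksQuery }) => {
      if (allLinksQuery.loading) return <div>Loading…</div>;
      return (
        <ul>
          {allLinksQuery.allLinks.map(link => (
            <li key={link.id}>{link.description} – {link.url}</li>
          ))}
        </ul>
      );
    };

    // The graphql() higher-order component wires the query result into props.
    export default graphql(ALL_LINKS_QUERY, { name: 'allLinksQuery' })(LinkList);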

A simple illustration from the tutorial (https://www.howtographql.com) that demonstrates how it’s simpler (and better) than a REST API.

Data Fetching with REST API

Data Fetching with GraphQL

React Key Concepts

I was given this set of questions by Emile and found them really useful for understanding basic concepts in React. Note that the questions about Higher-Order Components and the component tree lead up to GraphQL.

What’s a component and why do we need it?

Components let you split the UI into independent, reusable pieces, and think about each piece in isolation. Conceptually, components are like JavaScript functions. They accept arbitrary inputs (called “props”) and return React elements describing what should appear on the screen.
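For example, a minimal component might look like this (the Welcome name and the name prop are just for illustration):

    import React from 'react';

    // Conceptually, a component is just a function from props to UI.
    function Welcome(props) {
      return <h1>Hello, {props.name}</h1>;
    }

    // Used like an HTML tag, with props passed as attributes:
    // <Welcome name="Sara" />

    export default Welcome;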

The purpose of state and props.

There are two types of data that control a component: props and state. Props are set by the parent and they are fixed throughout the lifetime of a component. For data that is going to change, we have to use state.

Props and state are related. The state of one component will often become the props of a child component. Props are passed to the child within the render method of the parent as the second argument to React.createElement() or, if you’re using JSX, the more familiar tag attributes.
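A small sketch of that relationship (the component names are illustrative): the parent owns the state and hands it down to the child as a prop, so when the parent’s state changes, the child re-renders with the new value.

    import React from 'react';

    // The child only receives data via props; it never modifies them.
    const Greeting = ({ name }) => <h1>Hello, {name}</h1>;

    // The parent owns the state; its state becomes the child's props.
    class App extends React.Component {
      constructor(props) {
        super(props);
        this.state = { name: 'World' };
      }

      render() {
        return <Greeting name={this.state.name} />;
      }
    }

    export default App;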


SEORoberto – Website Crawl & Monitoring tool

I’m finally done with the final project for the Full Stack Foundation course. I built a tool for search engine and digital marketers to scan, monitor and manage website audits.

It’s built with AngularJS, Node.js, Express and MongoDB, and took 2 months to complete whilst attending classes full-time. It’s not fully baked, but it’s an MVP.

As part of the assessment I was required to give a presentation to investors; you can view my pitch for the app here. Visit https://seoroberto.herokuapp.com to test the app. Here’s a breakdown of the features of this app.

Static Pages

  • Product, pricing and contact pages with information about the product and pricing plans. However, payment is not integrated in this MVP.
  • Registration, login with welcome email and logout.

Crawling

  • Input a domain URL and, with a click of a button, the tool crawls the links on every webpage and saves each page’s title, headings, meta description, etc., as well as the user’s ID, into the MongoDB database as JSON objects (see the schema sketch after this list). Only the first 8 links are crawled for demonstration, to keep the database small.

  • View the scanned data in a table where you can drag & drop to rearrange columns, hide columns and sort by header. The headers are sticky. There’s also the option to export the table to a CSV file.
  • Search and filter functionality. You can filter the data by date, domain URL, or both, and you can also search for a specific keyword. This makes for easy comparison and analysis of the data – e.g. enter a specific page URL and see how its titles/descriptions change over time and how that affected search rankings.
  • Missing labels flag empty values on a webpage that should be filled in for search engines to pick up.
  • Too-long labels show the character count of your titles and meta descriptions. Titles and descriptions that are too long get truncated by Google.

  • Report – this page is hardcoded with sample data. It’s just an example chart; more visualisations of key performance metrics would be added in the actual app.
  • Chart – shows the proportion and number of indexed and noindex pages, as well as the number of pages where the meta robots tag is not specified, which should be fixed. This is a performance metric, as more indexed pages means more landing pages for search. The chart can be downloaded as a PDF or image file.

  • Scheduled scans – currently a user can set only one scheduled scan per account. A scheduler pings an API endpoint in the app weekly at a specific day and time. The scan is performed on the server, and an email is sent to the user when it is done so he/she can log in to view the scanned data. There’s however a bug in the scheduled scan that I haven’t been able to fix yet.
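For context, here’s a hedged sketch of what one crawled-page document could look like as a Mongoose schema – the field names are illustrative, not the app’s actual schema:

    const mongoose = require('mongoose');

    // One document per crawled page, tied to the user who ran the scan.
    const pageSchema = new mongoose.Schema({
      userId:          { type: mongoose.Schema.Types.ObjectId, ref: 'User' },
      domain:          String,
      url:             String,
      title:           String,
      headings:        [String],
      metaDescription: String,
      metaRobots:      String,   // e.g. "index, follow" or "noindex"
      crawledAt:       { type: Date, default: Date.now },
    });

    module.exports = mongoose.model('Page', pageSchema);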

There’s lots more work to be done on the app and I will continue working on it as a side project 🙂

Web Development Bootcamps in Singapore

I recently completed Full Stack Foundation at NUS ISS, and the TechLadies bootcamp last October. Here’s my personal review and comparison of the different in-person software development bootcamps that are available in Singapore. I added General Assembly into the mix even though I did not attend it, as it’s a really popular one that you’ve probably heard of or considered.


Crawl & Scrape with Node.js

I’m a bit obsessed with web crawlers – I find them strangely fascinating. Previously I wrote a tutorial about building a web crawler in Ruby that parses pages into a .csv file; read the tutorial here – Build a Web Crawler in 5 Easy Steps. It’s literally 15 lines of code to get a basic Ruby crawler running.

Now, having learnt some Node.js, I decided to write a JavaScript web crawler. I used the node-crawler npm package, similar to the spidr Ruby gem. In the Ruby crawler I wrote, Nokogiri is used to parse the page using XPath. For the Node.js crawler, Cheerio lets you use jQuery selectors to grab HTML attributes, text, classes and more.

Here’s the repo, or continue reading for the code snippet.
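For a rough idea of how it works – this is a minimal sketch rather than the exact code in the repo – node-crawler hands each response to a callback with a Cheerio instance on res.$, so you can use jQuery-style selectors:

    const Crawler = require('crawler');

    const c = new Crawler({
      maxConnections: 10,
      // Each fetched page is parsed with Cheerio and exposed as res.$
      callback: (error, res, done) => {
        if (error) {
          console.error(error);
        } else {
          const $ = res.$;
          // jQuery-style selectors: grab the page title and every link URL.
          console.log($('title').text());
          $('a').each((i, el) => {
            console.log($(el).attr('href'));
          });
        }
        done();
      },
    });

    // Queue the page(s) to crawl.
    c.queue('https://example.com');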


Facebook Messenger Bot with Ruby & Sinatra

Hacked together a basic Facebook bot that returns the GPS coordinates of a country, city/town or address, using this tutorial.

Code for the simple bot is on my GitHub. You can also message the CoBot Facebook page. It’s hosted on a free Heroku site so it’s incredibly slow; it was a lot quicker on my local server. Update: Rhys explained to me that Heroku puts the free dynos to sleep and wakes them as required. Since the dyno is sleeping, the message isn’t sent until it wakes.

It’s amazing how you can use chatbots to gather and serve information. There’s so much you can do with bots – create surveys, present promotions to page followers, serve bus arrival times to commuters and much more.

Learning the JavaScript Stack

It seems like JavaScript is the way forward – React, Angular, Node.js and so on; these frameworks/libraries are so important now. I recently embarked on my journey to learn the JavaScript stack.

Here’s my journey so far..

I did pick React over Angular, but it seems like Angular 1 is taught in the programme, so I’ll be learning it. I do worry about spreading myself too thin, learning too many frameworks instead of getting good at one. But I promise to focus on one soon enough, once I know where I’ll be.