sábado, 19 de octubre de 2019

Common Language Runtime

This article compares the runtime environments of two languages that seem similar: Java and C#. As far as I know, both are designed to be object-oriented languages. For almost every semester at the Tecnológico de Monterrey I have worked quite a bit with Java, and I used lots of libraries that run on the JVM, but I wasn't aware of its limitations. So far, in the professional area I have not been assigned a task complicated enough to require low-level interactions such as pointers or manual memory management. However, I do understand their usefulness, and it rang a bell about the possible limitations for library development. The article seems to emphasize that the JVM has limitations on pretty much everything compared to .NET's CLI. 
The way the JVM builds data groups, analyzing the syntax directly and then validating it as a new object before creating the datatype, sounds extremely resource-consuming. In the same way, the author explains that certain programming paradigms are not possible due to the way the JVM is built. Things that seem attached to the mathematical nature of a programming language, such as tail recursion, have to be implemented artificially when developing a compiler on the JVM. Oracle went a bit too far with their "we only compile proper code" stance, adding too much overhead to their environments. 
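That "artificial" implementation of tail recursion is essentially the compiler rewriting the call into a loop itself, since JVM bytecode has no tail-call instruction. A minimal sketch of the rewrite, in Python (which, like the JVM, does not optimize tail calls; the function names are mine, purely for illustration):

```python
def factorial_recursive(n, acc=1):
    # Natural tail-recursive form: on the JVM, each call would
    # consume a new stack frame.
    if n <= 1:
        return acc
    return factorial_recursive(n - 1, acc * n)

def factorial_iterative(n):
    # What a compiler targeting the JVM must emit instead: the tail
    # call becomes a jump back to the top of a loop, reusing one frame.
    acc = 1
    while n > 1:
        acc *= n
        n -= 1
    return acc

print(factorial_iterative(10))  # 3628800
```

Both versions compute the same value; the second just trades the elegant recursive shape for bounded stack usage.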
Well, it seems the author is completely convinced that the CLI is superior to the JVM. As far as the course goes, I don't have any complaints about the tools we have been using in the .NET framework. C# has some hip tricks that seem to make writing the compiler considerably easier. It is interesting that at this point I am starting to better understand why so many highly regarded professors and Stack Overflow users felt some kind of aversion toward Java. 

martes, 8 de octubre de 2019

Implementing a web Language

I have some experience developing server-side web applications. Back in my old job at AstraZeneca, I had the task of giving security maintenance and updates to a discounts application. This was mainly done through a Node.js middleware that communicated with the real back-end technology, which was mostly implemented in the Spring framework for Java.
I was quite impressed by how different the paradigm is, as it was my first time messing around with a "scripting" language. As the lecture states, the Node/Express framework is heavily focused on handling outside requests, perhaps doing some processing on them, and forwarding them. And I feel like implementing a lighter language with these specifications is more complex than developing a full-scale console-based language. One of my tasks was also implementing cross-framing with another of AZ's web applications. There is so much to consider on the security side for this type of language: cross-site scripting, click-jacking, header validation, and watching for HTML injection, among many others.
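The header-hardening part of that kind of middleware can be sketched in a few lines. This is written with Python's standard http.server rather than Node/Express, and the handler name and header choices are illustrative, not AZ's actual configuration:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class HardenedHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        # Mitigate click-jacking: forbid other origins from framing us.
        self.send_header("X-Frame-Options", "SAMEORIGIN")
        # Mitigate XSS / HTML injection: restrict where content may load from.
        self.send_header("Content-Security-Policy", "default-src 'self'")
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"hello\n")

    def log_message(self, *args):
        pass  # keep the sketch quiet

def run(port=8080):
    # Blocks, serving requests on the given port; call run() to start.
    HTTPServer(("127.0.0.1", port), HardenedHandler).serve_forever()
```

Every response goes out with the protective headers attached, which is exactly the kind of blanket policy a front-facing middleware is good for.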

I think it was a good decision to provide a kernel to the students, as dealing with HTTP requests at a low level seems difficult enough that I can't imagine how you would even set up and test the most basic functionality. It would be necessary to have some advanced knowledge of the required web protocols, I believe. Perhaps a project which focuses specifically on building a request framework for a secondary language would be just as interesting, and could offer some insight into this (for me) obscure knowledge.
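To get a feel for what such a kernel has to handle, here is a toy parser for the first line and headers of a raw HTTP/1.1 request, in Python. It is only a sketch under the assumption of well-formed input; a real server must cope with far messier bytes:

```python
def parse_request(raw: bytes):
    # Split the head (request line + headers) from the body.
    head, _, body = raw.partition(b"\r\n\r\n")
    lines = head.decode("ascii").split("\r\n")
    # The request line: e.g. "GET /index.html HTTP/1.1"
    method, path, version = lines[0].split(" ")
    # Each header line is "Name: value"; names are case-insensitive.
    headers = {}
    for line in lines[1:]:
        name, _, value = line.partition(":")
        headers[name.strip().lower()] = value.strip()
    return method, path, version, headers, body

raw = b"GET /index.html HTTP/1.1\r\nHost: example.com\r\n\r\n"
print(parse_request(raw)[:2])  # ('GET', '/index.html')
```

Even this happy-path version hints at the pitfalls: line endings, encodings, duplicate headers, and bodies whose length depends on yet another header.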

As tough a task as it is, it sounds quite interesting and worth a look, especially at this point in time, when a lot of companies are basing their desktop client applications on top of a web application. There are even some frameworks for this, such as GitHub's Electron, which I believe is having modest success. As much as I despise looking at some <a><{user.song}></a> bug on my Spotify desktop client, the experience this project can potentially provide is proving to be more and more relevant in the IT industry.

jueves, 3 de octubre de 2019

S-Form Interpreter

Back when I was having fun with Scheme Lisp in my programming languages course, I came to wonder why whoever designed the language decided to make function calls in such an odd way. In this reading I just learned that the odd way is actually called an S-expression, and it seems to be that way because it is easier to implement for the language developers. After many headaches caused by the ridiculous number of parentheses in the nested functions of my final project, I developed an aversion toward whoever designed Lisp. At the time I thought it might have had to do with the recursive nature of the language. 

It seems that I was right in some way, since S-expressions come naturally when recursion is applied correctly to nested functions: the parser reads each expression relying on the context defined in Ruby, as shown in section 2.3.
Managing contexts seems to be one of the most useful implementations within the framework; having quick access to data outside of the function currently being parsed seems to be very characteristic of Lisp-like languages. The article states that the framework is quite able to support simple functional languages using basic functionality such as defining functions, passing arguments, and setting values. 
This article gave me a better understanding of why S-expressions make parsing simpler. It is easy to see in the Ruby code that one can parse the expression by taking the first element of the S-expression (that is, the function) and then defining what to do with each of the remaining arguments, treating them as simple indexes into the parsed expression. Things get complicated when dealing with recursion and scopes, and as the author states, there are some basic optimizations for recursion that cannot be implemented with the chosen approach.
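The head-is-the-operation idea can be shown in very little code. This is a stripped-down sketch in Python rather than the article's Ruby, and it only covers a couple of arithmetic operations:

```python
def parse(tokens):
    # Recursive-descent parse of a token list into nested Python lists.
    tok = tokens.pop(0)
    if tok == "(":
        expr = []
        while tokens[0] != ")":
            expr.append(parse(tokens))
        tokens.pop(0)  # drop the closing ")"
        return expr
    return int(tok) if tok.lstrip("-").isdigit() else tok

OPS = {"+": lambda a, b: a + b, "*": lambda a, b: a * b}

def evaluate(expr):
    # The first element names the operation; the rest are its arguments,
    # each evaluated recursively, which handles nesting for free.
    if isinstance(expr, list):
        op, *args = expr
        return OPS[op](*[evaluate(a) for a in args])
    return expr

source = "(+ 1 (* 2 3))"
tokens = source.replace("(", " ( ").replace(")", " ) ").split()
print(evaluate(parse(tokens)))  # 7
```

Note how both functions are just a dozen lines because the syntax is already the parse tree; that is the simplicity the article is pointing at.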

Grace Hopper: The Queen of Code

I believe that my knowledge of the history of computing is somewhat limited at this moment of my life. Perhaps it is because most of the recent developments in computer science were achieved by the employees of corporations, which makes its history slightly dull or hidden. However, there is always somebody outstanding enough to spice up the history of things. I am very impressed that a woman managed to emerge as one of the pioneers of this discipline. If women in the IT sector have it quite hard nowadays, imagine being a Navy officer and a computer engineer, in the middle of WWII. Perhaps it was the war that forced officers and engineers to look the other way and leave behind their beliefs about the role of women in 20th-century Western society.

I do agree with the article when it states that whoever designed the compiler had the ability to think ahead of what machines were then capable of calculating. Naturally, somebody of such importance so early in the development of computer systems would have some myths grow around them, such as the one about the moth. I find that myth quite amusing.

I believe the engineering world needs to encourage women to get involved, as their contributions are important enough to enable whole age-changing industries and sciences. Grace is a great example of strength and determination against all odds.

sábado, 7 de septiembre de 2019

Internals of GCC

Podcast overview: Internals of GCC, an interview with Morgan Deters

In this post I will take a look at what I understood from this podcast, recorded for the Software Engineering Radio website. 
A couple of semesters ago I took my Advanced Programming course, which focuses on some of the low-level facilities provided by the C language. In the first few lectures I was taught that libraries needed to be linked manually on the command line. I also learned that I could pass a couple of flags to the compiler so that it applies some optimizations when producing the executable.
I am still unsure about the functionality of all the layers my C program has to go through in order to be transformed into machine code. This podcast gave me a nice insight into how a tool such as GCC manages to do such optimizations.

From what I understood, GCC first transforms code into a standardized tree of instructions that is architecture-independent, so every language supported by GCC converges into the same intermediate representation; only later is it lowered for the architecture being used. A "front end" needs to be built for each of the languages, since high-level expressions are treated differently. Additionally, this language-specific front end connects through the so-called "middle end" to the architecture-specific work in the back end. During the podcast there is a question about optimization support for multi-core architectures in the GCC collection. My friend and classmate Rodrigo Garcia speculated about GCC's lack of support for this kind of architecture in his own post. Personally, I believe there will be support at some point; however, the number of variables to consider when building a safe compiler scales up the difficulty of such optimizations, since resource sharing, as far as I know, is one of the fundamentals of multi-core processing, and it is quite risky and difficult to implement.
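A toy model of that pipeline may make the shape clearer. This is entirely illustrative Python, not GCC's real data structures or IR: two pretend "front ends" lower different surface syntax to one shared instruction list, and a single "back end" consumes it:

```python
def frontend_infix(src):
    # Pretend C-like front end, e.g. "a = b + c"
    dst, rhs = [s.strip() for s in src.split("=")]
    x, op, y = rhs.split()
    return [("add" if op == "+" else "sub", dst, x, y)]

def frontend_lisp(src):
    # Pretend Lisp-like front end, e.g. "(set a (+ b c))"
    t = src.replace("(", " ").replace(")", " ").split()
    return [("add" if t[2] == "+" else "sub", t[1], t[3], t[4])]

def backend(ir):
    # One back end serves every front end: it only sees the shared IR.
    return ["%s %s, %s, %s" % instr for instr in ir]

# Different surface syntax, same IR, hence the same output:
print(backend(frontend_infix("a = b + c")))
print(backend(frontend_lisp("(set a (+ b c))")))
# both print: ['add a, b, c']
```

The payoff of this design is the same one the podcast describes: adding a language means writing one front end, and adding an architecture means writing one back end, instead of one compiler per pair.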
One thing I found interesting is the assignment of registers at the assembly level. It seems this needs to be done by GCC since a processor actually provides only a few of them for the execution of a program. I believe this process might play a role in making multi-processor optimization as limited as it is.
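The core of the problem can be sketched quickly: the compiler's unbounded "virtual" registers must be mapped onto a small fixed set of physical ones, spilling the rest to the stack. Real allocators use liveness analysis with graph coloring or linear scan; this greedy Python version, with made-up register names, is only an illustration of the mapping itself:

```python
PHYSICAL = ["r0", "r1"]  # pretend the machine has only two registers

def allocate(virtual_regs):
    mapping, free, spills = {}, list(PHYSICAL), 0
    for v in virtual_regs:
        if free:
            # A physical register is available: hand it out.
            mapping[v] = free.pop(0)
        else:
            # Out of registers: the value lives on the stack instead.
            mapping[v] = "stack[%d]" % spills
            spills += 1
    return mapping

print(allocate(["t0", "t1", "t2"]))
# {'t0': 'r0', 't1': 'r1', 't2': 'stack[0]'}
```

Every spill turns a register access into a memory access, which is why allocation quality matters so much for the generated code's speed.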




viernes, 23 de agosto de 2019

Making Compiler Design Relevant for Students

In this entry I will briefly share my thoughts about the lecture "Making Compiler Design Relevant for Students who will (Most Likely) Never Design a Compiler" by Saumya Debray. 

At the very beginning of my undergraduate degree, I took my introductory programming course with Ariel, the same lecturer who teaches this course. At some point he mentioned that he was also the teacher of the Compiler Design course. At that time I had a very vague idea of what a compiler was, and I thought that creating one was all about writing in assembly language, performing some black magic, and suddenly you would have a compiler for your very own programming language. Now, after several years of gaining knowledge and experience in the computational sciences, I no longer think it is all black magic; however, I believe that I will never build a compiler for anything other than recreational purposes. And this is why this reading suits me.

The main argument of the author is that there are several translation problems a student can solve using knowledge and techniques learnt in a compiler design course. His main example is a task that consists of translating LaTeX components into HTML, which seems to be something very useful by itself. Another argument is the mastery of lex and yacc, a set of tools that, as far as I know, are the ones to be used during this course. I believe it is always good to get to know a new tool. In my very small experience as a software developer in industry, managers often give you tasks without having in mind what your experience or expertise is. For example, I had to learn very quickly the principles of cross-framing and scripting security in a NodeJS application, and I swear that if I had had minimal experience in anything to do with the bare theory behind those principles, the task would have taken perhaps a couple of hours instead of a couple of weeks, as was the case. Maybe someday, even if I don't become a compiler designer myself, I will find some of the knowledge and theory behind it handy, and I think that's enough of a reason to give it a shot and get some value out of the course. After all, I believe that I won't be working with eighty percent of the topics that have been given to me during my undergraduate degree.
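A tiny taste of that LaTeX-to-HTML task, reduced to a handful of regex rewrite rules in Python. The rules here are my own minimal picks; a real translator would need an actual parser, much like the lex/yacc pipelines the course covers:

```python
import re

# Each rule maps one LaTeX construct to its HTML counterpart.
RULES = [
    (r"\\textbf\{([^}]*)\}", r"<b>\1</b>"),
    (r"\\textit\{([^}]*)\}", r"<i>\1</i>"),
    (r"\\section\{([^}]*)\}", r"<h2>\1</h2>"),
]

def latex_to_html(src):
    for pattern, repl in RULES:
        src = re.sub(pattern, repl, src)
    return src

print(latex_to_html(r"\section{Intro} some \textbf{bold} text"))
# <h2>Intro</h2> some <b>bold</b> text
```

Regexes break down as soon as constructs nest (e.g. bold inside italics inside an environment), which is precisely the point where grammar-based tools from a compilers course start to pay off.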

martes, 13 de agosto de 2019

Personal Introduction

First entry: A personal introduction

Hello reader, this is my first entry for my Compiler Design course blog. My name is Valentín Ochoa, I am currently 23 years old, and I am an aspiring computer scientist.
I have programmed since my late days of high school and I have always found it quite thrilling. I had the luck to be a CS student at the University of Toronto for a year.

Some of my hobbies include playing the piano, scuba diving, rapid chess and video games.
I picked up the piano when I was a little kid; my parents kind of pressured me into it, but I really love that they did. I nearly enrolled in the National Conservatory to pursue a career as a pianist when I was 16, but I decided that the piano would be in a better place as my hobby. 
I picked up scuba diving because my father is a big ocean lover and we frequently go on fishing trips together. I brought up scuba diving and he decided to sign us up for a course so we could get our licenses. Even though we don't dive that frequently, as a diver you get to see views that are out of this world.

I frequently watch horror movies with my best friend. At the beginning we did it to have a laugh, as she would scream at even the cheapest jump-scare in the movie theater. However, we have found a couple of films that are worth watching for more than making fun of my friend. 

As for the course, I'd like to learn how to build a compiler so I can make some silly or creative programming language for entertainment. Lately I have had some interest in a couple of esoteric languages such as Piet, ArnoldC, and Emojicode. Maybe after taking the course I can try to build something alike.
