This time we used digital Post-its to visualize the retro.
What went well?
All blog posts and peer reviews were completed on time
Successful Keycloak integration
What didn’t go well?
Priorities were set elsewhere
An unclear vision and a lack of motivation resulted in procrastination and stagnating progress
Communication did not always go well
Define a clear vision of what you want to do
Clear and regular communication
Get priorities straight
Let everyone present their work regularly
Overall, this semester could have gone better. Next time we will make sure to pick a topic everyone is interested in instead of choosing one on a whim. Seeing each other in person regularly would probably help as well. In the end, we managed to finish the project.
This week our task was to have a look at some metrics of our code. To do this, we set up a SonarQube instance that scans our project every time we push something to GitLab. A pipeline then runs and performs the actual check. You can find our SonarQube here and an example pipeline checking the metrics here (sadly it doesn’t run through at the moment, but the pipeline is configured).
Unfortunately, the basic SonarQube version offers only a very limited set of metrics that were useful for us. Therefore we could only use the Security Hotspot metric from SonarQube for this post. This metric shows you the parts of your code where you should pay extra attention to keep everything secure, based on known risks from other projects. We had four Security Hotspots and one Vulnerability. Some of these problems could be fixed easily. The first was that CORS was enabled for every origin, which would allow untrusted foreign pages to use our API. The next was that the API documentation was accessible via different HTTP methods, which allows incorrect usage. But there were also some problems we couldn’t solve. For example, it is not secure to use command-line arguments without any validation, but since we don’t use them for custom configuration and Spring Boot requires some args, we ignored the warning. There were also some warnings concerning authentication, but we could ignore those too because we use Keycloak to manage our users, which is secure enough on its own. This merge request shows these changes.
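To give an idea of the CORS fix, here is a minimal sketch of restricting allowed origins in Spring, assuming a standard WebMvcConfigurer setup; the paths and the origin URL are placeholders, not our real configuration.

```java
import org.springframework.context.annotation.Configuration;
import org.springframework.web.servlet.config.annotation.CorsRegistry;
import org.springframework.web.servlet.config.annotation.WebMvcConfigurer;

// Restrict CORS to a known frontend origin instead of allowing "*",
// so foreign pages can no longer call the API from the browser.
@Configuration
public class CorsConfig implements WebMvcConfigurer {
    @Override
    public void addCorsMappings(CorsRegistry registry) {
        registry.addMapping("/api/**")                      // placeholder path
                .allowedOrigins("https://frontend.example") // placeholder origin
                .allowedMethods("GET", "POST", "PUT", "DELETE");
    }
}
```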
To generate some more metrics we also used the MetricsReloaded plugin for IntelliJ. From that plugin we used the complexity metric to analyze our code. It calculates the cyclomatic complexity for your methods, classes, packages, modules and the whole project, but we concentrated only on the methods. The value represents the number of independent paths that exist through your function. Before the refactoring we had some methods with very high values of 9–14. It turned out that this code was a leftover from the static test data we had put directly into our code, so this was a good chance to remove those old lines and put everything in its correct place. This WIP merge request shows the beginning of this refactoring. These changes could reduce the values to 4–7.
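To illustrate what such a refactoring can look like, here is a toy example, not taken from our code base: a branch-heavy method (each branch adds one independent path) replaced by a data-driven loop with the same behavior.

```java
public class ComplexityDemo {
    // Before: an if/else chain like this adds one path per branch,
    // which is how our test-data leftovers reached complexity 9-14.
    static String rankBefore(int score) {
        if (score >= 90) { return "S"; }
        else if (score >= 80) { return "A"; }
        else if (score >= 70) { return "B"; }
        else if (score >= 60) { return "C"; }
        else { return "D"; }
    }

    // After: the same decision as data plus a single loop,
    // so the cyclomatic complexity stays low no matter how
    // many ranks are added.
    static final int[] LIMITS = {90, 80, 70, 60};
    static final String[] RANKS = {"S", "A", "B", "C"};

    static String rankAfter(int score) {
        for (int i = 0; i < LIMITS.length; i++) {
            if (score >= LIMITS[i]) { return RANKS[i]; }
        }
        return "D";
    }
}
```

Both methods return the same rank for every score, but only the second one keeps its complexity constant as rules grow.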
Today we will introduce you to the bridge pattern. A software design pattern “is a general, reusable solution to a commonly occurring problem within a given context in software design. […] It is a description or template for how to solve a problem that can be used in many different situations.” (Wikipedia) The bridge pattern is meant to “decouple an abstraction from its implementation so that the two can vary independently” (Wikipedia).
We chose the bridge pattern because we had already implemented it. Each API request-response pattern is defined in an interface and implemented in a separate class. This resembles the degenerate bridge pattern, as each abstraction comes with only one implementation. A closer look, however, reveals that all abstractions extend the interface API. The concrete implementations implement the shared interface Controller. The UML diagram shows a cleanly implemented bridge pattern.
This helps us a lot, because we can keep the API definitions separate from the controllers, which makes the controller classes much more readable and focused on the underlying functionality.
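The structure described above can be sketched in Java as follows. Only the interface names API and Controller come from our project; UserApi, UserController and their methods are hypothetical placeholders for illustration.

```java
// Abstraction side: every request-response pattern is an interface
// that extends the shared API interface.
interface API {
    String basePath();
}

interface UserApi extends API {
    String getUser(String id);
}

// Implementation side: each concrete class implements the shared
// Controller interface plus exactly one abstraction
// (the "degenerate" bridge with one implementation per abstraction).
interface Controller {
    String name();
}

class UserController implements Controller, UserApi {
    public String name() { return "UserController"; }
    public String basePath() { return "/users"; }
    public String getUser(String id) { return "user:" + id; }
}
```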
This week, we had to refactor an example project to practice avoiding code smells and improve our overall programming.
In this special case, every project member had to do the task on their own. This ensures that even the less experienced team members get some practice and better code quality.
Everyone had to use a separate repository and commit every change together with the corresponding code smell. In addition, unit tests had to be written to ensure proper functionality after refactoring.
We all used IntelliJ for our refactoring, which offers a lot of useful functions for refactoring your code. For example, you can automatically “rearrange code”, “optimize imports”, “cleanup” and “reformat code” on every commit.
Edit: After a long time of trying different things to get the Cucumber tests to work, we finally succeeded by replacing Ruby with Java as the language for the step definitions, which solved all the dependency issues we had with Ruby.
The following video shows a run of most of our feature files. Even if not all tests succeed, the code is working. The failing tests are caused by problems with our test data; better data should fix those tests too.
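For reference, a Java step definition in the new setup looks roughly like this, assuming the standard io.cucumber.java bindings; the step wording, class name and fields are illustrative, not taken from our actual feature files.

```java
import io.cucumber.java.en.Given;
import io.cucumber.java.en.When;
import io.cucumber.java.en.Then;

// Illustrative Java step definitions for a hypothetical prediction feature.
public class PredictionSteps {
    private int predicted;

    @Given("a logged-in user")
    public void aLoggedInUser() {
        // here we would authenticate against the test Keycloak realm
    }

    @When("the user predicts {int} goals")
    public void theUserPredicts(int goals) {
        predicted = goals;
    }

    @Then("the prediction {int} is stored")
    public void thePredictionIsStored(int goals) {
        assert predicted == goals;
    }
}
```

The Gherkin feature files stay unchanged; only the glue code moved from Ruby to Java.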
After choosing our next use-cases last week, this week we spent our time on predicting how long it would take us to fulfill these goals. To do this we used the function point method to calculate the time needed for each use-case. A function point is a unit for the complexity of a single piece of your software.
At first we calculated the function points for the already implemented use-cases. Even after searching the internet we are still a bit confused about how exactly the value is determined, but we tried our best to be as accurate as possible. Unfortunately, three of the old use-cases aren’t completely finished, so they became outliers that don’t fit the model very well. Therefore we could only use two of the old use-cases to generate a function that calculates the time used per function point. We should probably adjust our graph once we have finished one or two more use-cases to get more reliable data.
After that we calculated the function points for the use-cases planned for this semester and estimated the time based on the trend of the old use-cases. The current status of our function point estimation can be found in the document section of our GitLab project. The following diagram shows all our use-cases together.
See you next week.
Unfortunately, we managed to convert one YouTrack day into 24 hours instead of 8, so our whole result wasn’t helpful at all. Therefore we updated our calculation again. But even this didn’t change the fact that our estimation is based on only “View our data”, which may have consumed more time than required because we had a lot of setup work to do for this use-case, and “Registration and Login”.
We also added all the remaining use-cases to our calculation and got a result of 135 hours of work to do this semester. Although we only spent 127 hours last semester developing the first use-cases, we are still optimistic about reaching our goal by the end of semester two. Because our setup is now complete, we can fully concentrate on implementation and won’t lose as much time as we did in the beginning.
At SaSEp we have a vision of Clairvoyance’s look and power by the end of the course. However, we in this constellation as a team have never refined a project to this stage of completion before. There are many things that could go wrong. We do not quite know what we are doing, but we know there are risks involved. We have collected and evaluated the most pressing risks to our successful completion of this course in the following list:
To further evaluate these risks, we have looked into the past. Last semester we made decent progress on several use cases. Since we track the time we spend on each ticket in YouTrack, the report behind this link is always up-to-date. You can see our time spent from inception to the 29th of April 2020 below:
With this information in hand, we at SaSEp feel well equipped to manage the risks this project involves competently and to make Clairvoyance a success.
We want to make our Clairvoyance platform more powerful and more personal, and we want our users to be able to contribute. Everyone should be able to build their dream league with custom teams and players to compete against each other.
Not just predictions, but the latest news around esports on our platform.
We want to write and publish exclusive news articles on our platform to inform our users about the latest developments in esports. Eventually, everyone will be able to apply to become a news author.
Complete rewrite of our backend
We made some mistakes last semester which resulted in a very overcomplicated backend structure and poor maintainability. So in the next few weeks we will reorganize our backend, reformat our code base and add useful extensions like keycloak-spring and Lombok. This will make our lives easier and the application much safer.
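As a small example of why we want Lombok: a data class like the hypothetical Player below needs no hand-written getters, setters, equals/hashCode or toString, because Lombok generates them at compile time.

```java
import lombok.Data;

// @Data generates getters for all fields, setters for non-final
// fields, equals/hashCode, toString and a constructor for the
// required (final) fields. Player is a made-up example class.
@Data
public class Player {
    private final String name;
    private int score;
}
```

The equivalent plain Java class would be roughly 40 lines of boilerplate for the same behavior.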
Better planning and organization
Planning is key, and so is preparation. We want to rethink our business process and our Scrum workflow to act faster and prevent inconsistencies and bad practices.
That is it for now. Thank you for following our progress again, and good luck and successful builds to all of you.