Intro

Hi, I'm Max, a former mechanical engineer now on my way to becoming a professional developer. I realized that I'm starting to lose track of all the little bits and pieces of knowledge necessary on this path, so I decided to create this database. It's a personal collection of notes that may or may not make sense to anyone else, but I thought I might as well make it public in case someone finds it useful.

If you have corrections or suggestions, don't hesitate to create a pull request in the corresponding GitHub repo.

If you would like to create a base like this yourself, shoot me a message and I'll happily guide you through the process (it's essentially an mdbook, being built by a GitHub action, linked to a StackEdit Markdown editor, I'm working on writing a quick HowTo).

Best regards! P.S.: I'll leave a Hemingway quote here that I found motivational: "There is nothing noble in being superior to your fellow man. True nobility is being superior to your former self."

Acknowledgments

This has been made using mdbook. Find it on GitHub.

Shout out to David Gasquez' handbook for the inspiration to make this.

Peaceiris' GitHub action made the implementation of this super easy.

And the editing works like a charm using StackEdit.

Web development

I let Kamran Ahmed's web developer roadmap be my main guide for a structured learning approach.

Therefore, my first website was a simple tribute page to the French politician Aristide Briand, in HTML with minimal CSS and no JavaScript.

Tribute Page

I am curious about the whole process, so I am alternating between the front-end and back-end paths. I have since built more complex sites with more JavaScript, some with a front-end framework (Vue.js), as well as a web-scraper project built with Django and MySQL. I will add and describe these projects in the projects section of this website.

Front-end

The Front End

CSS

Cascading Style Sheets (CSS) is a stylesheet language used to describe the presentation of a document written in HTML or XML (including XML dialects such as SVG, MathML or XHTML). CSS describes how elements should be rendered on screen, on paper, in speech, or on other media.

Good to know

Mobile first!

Sources

Template

Design Guidelines

A summary of Don't make me think:

  • If you can't make something self-evident, you at least need to make it self-explanatory
  • Take advantage of conventions
  • Create effective visual hierarchies
  • Break up pages into clearly defined areas
  • Make it obvious what's clickable
  • Eliminate distractions
  • Format content to support scanning
  • Clarity trumps consistency
  • The more important something is, the more prominent it is
  • Things that are related logically are related visually
  • Things are nested visually to show what's part of what
  • Use plenty of headings and don't let them float
  • Keep paragraphs short
  • Use bulleted lists
  • Happy talk must die
  • Instructions must die
  • Navigation should be on every site and in the same place except for forms
  • All web users expect the site ID (logo) to be a home button
  • Every page should have a search box or a link to search
  • For Search boxes avoid:
    • Wording outside the norm (Quick Find instead of Search)
    • Instructions (no one cares)
    • Options (give it on the results page if the first shot didn't work)
  • Give low level (3rd or 4th level) navigation the same attention
  • The most common failing of "you are here" indicators is that they are too subtle
  • Overly subtle visual cues are a very common problem, because subtlety is a trait of good design. Most users are in too much of a rush to notice them, though
  • Breadcrumbs are good for navigation:
    • At the top
    • with > in between and > A bold last item
  • Try the trunk test (page 62)
  • The home page needs to answer 5 questions:
    • What is this?
    • What can I do here?
    • What do they have here?
    • Why should I be here and not somewhere else?
    • Where do I start?
  • Don't use small, low-contrast type
  • Don't put labels inside form fields
  • Preserve the distinction between visited and unvisited links
  • Don't float headings in between paragraphs (have them closer to the text that follows than the one that precedes)

How we use websites:

  • We don't read pages. We scan them.
  • Most of the time we don't choose the best option - we choose the first reasonable option
  • All web users expect the site ID (logo) to be a home button

Testing:

  • The antidote for religious debates is testing
  • Every web development team should spend one morning a month doing usability testing
  • Use services like usertesting.com
  • Focus on fixing the most serious problem first

Mobile:

  • Allow zooming
  • Link to relevant pages, not to Homepages
  • Give an option to view the desktop version of the site
  • Make sure visual hints (affordances) don't get lost in the mobile version
  • Remember that speeds on mobile are unreliable (make mobile sites small)
  • Make it possible to change font size (Really?)

HTML

The Hypertext Markup Language defines the meaning and structure of web content.

Sources

Vue

Installation

Manual

Import script tag:

<script src="https://unpkg.com/vue@3.0.0-beta.12/dist/vue.global.js"></script>

Organize the instances within your html with:

<!-- Insert the component (the main data is directly accessible) -->
<component-name 
  :dataItemBoolean="dataItemBoolean" <!--data to transmit from main to component --> 
  @first-component-function="firstFunction" <!-- listens to the component's emitted event and triggers the main function -->
  @second-component-function="secondFunction">
</component-name>

<!-- Import App -->
<script  src="./main.js"></script>

<!-- Import Components -->
<script  src="./components/ComponentName.js"></script>

<!-- Mount App -->
<script>
	const mountedApp = app.mount('#app')
</script>

inside main.js:

const app = Vue.createApp({
    data() {
        return {
            dataItemArray: [],
            dataItemBoolean: true
        }
    },
    methods: {
        firstFunction(userInput) {
            this.dataItemArray.push(userInput)
        },
        secondFunction(userInput) {
            const index = this.dataItemArray.indexOf(userInput)
            if (index > -1) {
                this.dataItemArray.splice(index, 1)
            }
        }
    }
})

inside components.js:

app.component('component-name', {
  props: { // data passed in from outside, bound with : in the template
    dataItemBoolean: {
      type: Boolean,
      required: true
    }
  },
  template: //the html implementation of the component
  /*html*/
  `<div class="css-class1">
    <div class="css-class2">
      <div class="css-class3">
	HTML implementation
      </div>
    </div>
  </div>`,
  data() {
    return {
        product: 'Socks',
        //the component's data: arrays, booleans etc.
    }
  },
  methods: {
      firstComponentFunction() {
          this.$emit('first-component-function', this.variants[this.selectedVariant].userInput)
      },
      secondComponentFunction() {
        this.$emit('second-component-function', this.variants[this.selectedVariant].userInput)
      },
  },
  computed: { //data that has been worked with
      title() {
          return this.part1 + ' ' + this.part2
      }
  }
})

Vue CLI

npm install -g @vue/cli
vue create <app-name>
# or
vue ui

Vue - Axios

Axios handles API calls within Vue. It can be installed via the Vue UI within the project dependencies. It is preferable to have a single Axios instance managing all API calls (instead of each view creating a new one). Therefore, there should be a ./services/APIService.js file that handles the session for all views and looks somewhat like this:

import axios from 'axios'

const apiClient = axios.create({
  baseURL: 'Full root URL of API Server',
  withCredentials: false,
  headers: {
    Accept: 'application/json',
    'Content-Type': 'application/json'
  }
})

export  default {
  getEvents() {
    return apiClient.get('/events')
  },
  getEvent(id) {
    return apiClient.get('/events/' + id)
  }
}

In the view, this would need to be imported and handled like this:

<script>
import APIService from '@/services/APIService.js'

export default {
  data() {
    return {
      events: null
    }
  },
  created() { //<- A Lifecycle hook that fetches the data when the view is created
    APIService.getEvents() //<-- calling the method we wrote in our js file
      .then(response => { //<-- if the call is successful put the data into "events"
        this.events = response.data
      })
      .catch(error => { //<-- log an unsuccessful call to the console
        console.log(error)
      })
  }
}
</script>

Databases

SQL

Structured Query Language

Create Table:

CREATE TABLE animals (
     id MEDIUMINT NOT NULL AUTO_INCREMENT,
     name CHAR(30) NOT NULL,
     PRIMARY KEY (id)
);
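The same idea in a quick, runnable sketch using Python's built-in sqlite3 module (SQLite has no MEDIUMINT, so the column types differ slightly from the MySQL statement above):

```python
import sqlite3

# In-memory database; in SQLite an INTEGER PRIMARY KEY auto-increments
conn = sqlite3.connect(':memory:')
conn.execute("""
    CREATE TABLE animals (
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        name TEXT NOT NULL
    )
""")
conn.execute("INSERT INTO animals (name) VALUES ('dog')")
conn.execute("INSERT INTO animals (name) VALUES ('cat')")
rows = conn.execute("SELECT id, name FROM animals ORDER BY id").fetchall()
```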

Hosting

Server

  • Run local testing server with Python (Link)

Apply updates

  1. Pull and overwrite local changes:
git add .
git stash
git pull
  2. Compile static files:
python manage.py collectstatic
  3. Reload nginx and gunicorn:
sudo systemctl reload nginx
ps aux | grep gunicorn #<--- find the pid of the Gunicorn main process
kill -HUP 944 #<--- 944 = pid of the Gunicorn main process

Nginx

Description

Server config

cd /etc/nginx/sites-available/
sudo nano #project-name <- create nginx config file there
server {
    listen 80;
    server_name XXXXXXXX; #<--Server Ipv4 address
 
    location = /favicon.ico { access_log off; log_not_found off; }
 
    location /static/ {
            root /path_to_project/<project_folder_name>; #<- folder containing static folder, normally same folder with manage.py 
    }
 
    location / {
            include proxy_params;
            proxy_pass http://unix:/path_to_project/<project_name>/<project_name>.sock;
    }
}

To enable the above file, create a link to the sites-enabled folder by running the code below:

sudo ln -s /etc/nginx/sites-available/myproject /etc/nginx/sites-enabled

Check the configuration file with:

sudo nginx -t

and run it with:

sudo service nginx restart

GUnicorn

Description

Server config

Install gunicorn and run the daemon worker with this command:

gunicorn --daemon --workers 3 --bind unix:/home/ubuntu/<project_name>/<project_name>.sock <project_name>.wsgi

The command has to be executed in the parent directory of the Django project (the directory that contains manage.py)

--daemon: Leaves the process running in the background. Should be substituted by Supervisor?

--workers: see here how to set the right number of workers. The rule of thumb is (cores x 2) + 1

--bind gunicorn to the unix socket set in the nginx configuration
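The rule of thumb can be computed directly in Python (this only reads the core count; the Gunicorn flag itself still has to be set manually):

```python
import multiprocessing

# Rule of thumb for the --workers flag: (cores x 2) + 1
cores = multiprocessing.cpu_count()
workers = cores * 2 + 1
```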

Django

Start the project: Prerequisite: sudo apt install python3-django

$ django-admin startproject mysite 
#add a . after the project name to use current directory

Start the app Prerequisite:

   $ virtualenv env
   $ source env/bin/activate
   $ pip install django

$ python manage.py startapp myapp

Make migrations (even if you're not using a database, you need to populate Django's sqlite file)

$ python manage.py migrate

add app and template directory to settings.py:

INSTALLED_APPS = [
    'myapp.apps.MyappConfig',
    ...
]

TEMPLATES = [{
    'BACKEND': 'django.template.backends.django.DjangoTemplates',
    'DIRS': [BASE_DIR / 'templates'],
    ...
}]

Create an admin page

$ python manage.py createsuperuser


Testing Rest Framework

Coverage

pip install coverage
coverage run --source=. manage.py test
coverage report

Populate objects

Try Model Bakery for automatic generation based on the models file, or Faker to generate more comprehensive data manually.

Structure

To better structure the tests, we delete the tests.py file that the startapp command created in the app folder and give each app its own tests folder. Within this folder, we need an __init__.py file to make Python recognize it as a module. Then it makes sense to structure tests per module (model, views, setup, ...). The setup can be inherited, so the structure should look something like this:

Project folder
-  App folder
-- tests
--- __init__.py
--- test_setup.py
--- test_model.py
...
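As a framework-free illustration of the inherited setup (the class names here are hypothetical, mirroring test_setup.py and test_model.py):

```python
import unittest

# test_setup.py-style base class: shared fixtures live in setUp
class TestSetup(unittest.TestCase):
    def setUp(self):
        self.user = {'username': 'testuser1'}

# test_model.py-style tests inherit the setup instead of repeating it
class TestModel(TestSetup):
    def test_username(self):
        self.assertEqual(self.user['username'], 'testuser1')

# run the inherited tests programmatically
result = unittest.TextTestRunner(verbosity=0).run(
    unittest.defaultTestLoader.loadTestsFromTestCase(TestModel)
)
```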

Documentation

Good table to get an overview of the API structure:

| Endpoint    | HTTP Method | CRUD Method | Result                |
|-------------|-------------|-------------|-----------------------|
| puppies     | GET         | READ        | Get all puppies       |
| puppies/:id | GET         | READ        | Get a single puppy    |
| puppies     | POST        | CREATE      | Add a single puppy    |
| puppies/:id | PUT         | UPDATE      | Update a single puppy |
| puppies/:id | DELETE      | DELETE      | Delete a single puppy |

Postgres

When checking with a Postgres DB, first make sure the local DB is running:

$ sudo -i -u postgres # switch bash to user postgres
$ psql # open the postgres interactive terminal

To make Django connect through the ident method (no password needed if the user name matches the system user), there must be no PASSWORD, HOST or PORT option in the Django DB setup.

Cors

Cross-origin resource sharing (CORS) is a security-relevant aspect when frontend and backend communicate with each other from different origins (URLs, ports etc.). This attack vector can be mitigated by including specific information in the HTTP header. For DRF, the django-cors-headers middleware is suggested.

$ pip install django-cors-headers

and add it to the settings.py:

 INSTALLED_APPS = [
	...
	'rest_framework',
	'corsheaders', # new
	...
]
MIDDLEWARE = [
...
'corsheaders.middleware.CorsMiddleware', # be sure to set it above Common
'django.middleware.common.CommonMiddleware',
...
]

CORS_ORIGIN_WHITELIST = (
'http://localhost:3000', # Frontend port, React in this case
'http://localhost:8000', # Django port
)

Authorization

To manage authorization in the API, the first step is to include the DRF urls in the project's urls file. project/urls.py:

urlpatterns = [
    path('admin/', admin.site.urls),
    path('api/v1/', include('posts.urls')), # versioned API endpoint
    path('api-auth/', include('rest_framework.urls')), # adds authentication (the actual url is unimportant)
]

Permissions can be granted on project, view and object level. So for example to restrict permissions in a view to authenticated users this would have to be added:

from rest_framework import generics, permissions

class ExampleView(generics.ListCreateAPIView):
	permission_classes = (permissions.IsAuthenticated,) # The trailing comma is important (this is a tuple)
	...

To change permissions at the project level, edit the project's settings.py. Options are:

  • AllowAny (default)- any user, authenticated or not, has full access
  • IsAuthenticated - only authenticated, registered users have access
  • IsAdminUser - only admins/superusers have access
  • IsAuthenticatedOrReadOnly - unauthorized users can view any page, but only authenticated users have write, edit, or delete privileges:
REST_FRAMEWORK = {
    'DEFAULT_PERMISSION_CLASSES': [
        'rest_framework.permissions.IsAuthenticated', # new
    ]
}

Custom Permissions

It is possible to create custom permissions for a project. To do that, we create a permissions.py file in our app folder. An example of a custom permission could look like this:

from rest_framework import permissions

class IsAuthorOrReadOnly(permissions.BasePermission):
    def has_object_permission(self, request, view, obj):
        # Read-only permissions are allowed for any request
        if request.method in permissions.SAFE_METHODS:
	        return True
        # Write permissions are only allowed to the author of a post
        return obj.author == request.user

Here, we inherit DRF's BasePermission class and override its has_object_permission method so that write access is only granted to the author of a piece. First we check whether the request method is safe (i.e. read-only: GET, OPTIONS and HEAD). If so, permission is granted; if not, we check whether the requesting user is the author. We then have to import this permission class into our views.py file to enforce it at the view level:

from .permissions import IsAuthorOrReadOnly # new
...

class PostDetail(generics.RetrieveUpdateDestroyAPIView):
    permission_classes = (IsAuthorOrReadOnly,)
    ...
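Stripped of the DRF machinery, the decision logic of the permission above boils down to a plain function (a simplified sketch; is_allowed and its arguments are made up for illustration):

```python
# Same tuple DRF exposes as permissions.SAFE_METHODS
SAFE_METHODS = ('GET', 'HEAD', 'OPTIONS')

def is_allowed(method, request_user, obj_author):
    # Read-only requests are allowed for anyone
    if method in SAFE_METHODS:
        return True
    # Write requests are only allowed for the author
    return obj_author == request_user
```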

These methods work well for detail pages because they modify the object permission. If it should be done for a list or collection, the queryset has to be overridden. More on that here

Authentication

While authorization manages permissions, authentication manages login, logout and user management (create, delete etc.). While Django uses a session-based cookie, this is not possible for a REST API, because REST is stateless by definition. Therefore, every request has to be fully independent. The solution is to add a unique identifier to each request. This can be a token of some kind. DRF offers: basic, session, token, and default.

  • Basic: The string username:password is encoded into base64 and sent under "Authorization" in the header. This should only be done over a secure HTTPS connection, because the credentials can easily be stolen and reused.
  • Session: After basic authentication, a cookie is generated on both sides that will further on be used to generate an ID which will be sent in the header. The database is only hit once for credentials and they are only in the first request, however managing these sessions for multiple front ends and many users is challenging and it is a stateful approach which violates the REST principle. This is therefore not advised.
  • Token: Upon login, a token is created and stored on the user side, either in localStorage or as a cookie. The current best practice is to save it as a cookie with the HttpOnly and Secure flags. localStorage does not automatically add the token to the header, and keeping it in both is vulnerable to XSS attacks. The server does not need to keep session state for the token (note: DRF's TokenAuthentication does persist tokens in the database). Additional features like token expiration can be set. This is currently considered the best approach. To enable it, 'rest_framework.authentication.TokenAuthentication', has to be added to the DEFAULT_AUTHENTICATION_CLASSES in settings.py and 'rest_framework.authtoken', has to be added to the INSTALLED_APPS
  • Default: The default for DRF is Session and Basic. The session is used for the browsable API while basic is used for the API itself. If we would add the default class to the settings.py file to make it explicit, it would look like this:
    REST_FRAMEWORK = {
    'DEFAULT_PERMISSION_CLASSES': [
    'rest_framework.permissions.IsAuthenticated',
    ],
    'DEFAULT_AUTHENTICATION_CLASSES': [ 
    'rest_framework.authentication.SessionAuthentication',
    'rest_framework.authentication.BasicAuthentication'
    ],
    }
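How a Basic auth header is built can be sketched in a few lines of Python (illustrative values; base64 is an encoding, not encryption, which is why HTTPS is a must):

```python
import base64

# Basic auth: base64-encode "username:password" and prefix with "Basic "
credentials = base64.b64encode(b'username:password').decode('ascii')
auth_header = 'Basic ' + credentials

# The encoding is trivially reversible - anyone who sees the header
# can recover the plaintext credentials
decoded = base64.b64decode(credentials).decode('ascii')
```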
    

User Endpoints

To have safe user endpoints to log in and out, we will use the third party packages dj-rest-auth and django-allauth.

dj-rest-auth (Login Logout, Password reset)

$ pip install dj-rest-auth

#settings.py
INSTALLED_APPS = [
	...
	# 3rd party
	...
	'dj_rest_auth',
]

#urls.py
urlpatterns = [
	...
	path('api/v1/dj-rest-auth/', include('dj_rest_auth.urls')), # new
]

The login can then be found under http://127.0.0.1:8000/api/v1/dj-rest-auth/login/ Further there are .../logout, .../password/reset and password/reset/confirm.

django-allauth (User Registration)

$ pip install django-allauth

# settings.py
INSTALLED_APPS = [
	...
	'django.contrib.sites',
	# 3rd-party apps
	...
	'allauth',
	'allauth.account',
	'allauth.socialaccount',
	...
	'dj_rest_auth.registration',
]
EMAIL_BACKEND = 'django.core.mail.backends.console.EmailBackend' # this setting will send emails to the console during development
SITE_ID = 1 # allauth uses djangos sites framework (host multiple sites from one project), we only host one site but specification is needed anyway

# urls.py
urlpatterns = [
	...
	path('api/v1/dj-rest-auth/', include('dj_rest_auth.urls')),
	path('api/v1/dj-rest-auth/registration/', include('dj_rest_auth.registration.urls')),
]

User API endpoint

Now that we can authenticate, we want to add a user endpoint to our API to list all users as well as individual users. Adding an endpoint always involves creating the serializer, the view and the URL route. To add Django's user model to the serializer, we write it like this (app/serializers.py):

# app/serializers.py
from django.contrib.auth import get_user_model # new
...

class UserSerializer(serializers.ModelSerializer):
	class Meta:
		model = get_user_model()
		fields = ('id', 'username',)


# app/views.py
from django.contrib.auth import get_user_model # new
from .serializers import PostSerializer, UserSerializer # new

...
class UserList(generics.ListCreateAPIView): # new
	queryset = get_user_model().objects.all()
	serializer_class = UserSerializer
class UserDetail(generics.RetrieveUpdateDestroyAPIView): # new
	queryset = get_user_model().objects.all()
	serializer_class = UserSerializer

# posts/urls.py
...
from .views import UserList, UserDetail
urlpatterns = [
	path('users/', UserList.as_view()),
	path('users/<int:pk>/', UserDetail.as_view()),
	...
]

Viewsets and Routers

At this point we can see quite a bit of repetition: the detail view and the list view look almost identical. We can use viewsets, which combine these functionalities, at the cost of some readability. To replace the views we have so far, we rewrite views.py like this:

# posts/views.py
from django.contrib.auth import get_user_model
from rest_framework import viewsets # new
from .models import Post
from .permissions import IsAuthorOrReadOnly
from .serializers import PostSerializer, UserSerializer

class PostViewSet(viewsets.ModelViewSet): # new
	permission_classes = (IsAuthorOrReadOnly,)
	queryset = Post.objects.all()
	serializer_class = PostSerializer
class UserViewSet(viewsets.ModelViewSet): # new
	queryset = get_user_model().objects.all()
	serializer_class = UserSerializer

Just like viewsets unify list and detail views, routers can simplify url routes.

#before
from django.urls import path
from .views import UserList, UserDetail, PostList, PostDetail 

urlpatterns = [
    path('users/', UserList.as_view()), 
    path('users/<int:pk>/', UserDetail.as_view()), 
    path('', PostList.as_view()),
    path('<int:pk>/', PostDetail.as_view()),
]
# after
from django.urls import path
from rest_framework.routers import SimpleRouter
from .views import UserViewSet, PostViewSet

router = SimpleRouter()
router.register('users', UserViewSet, basename='users')
router.register('', PostViewSet, basename='posts')
urlpatterns = router.urls

It's not really clear whether writing a little less code is worth the tradeoff in readability and customization. A good rule of thumb is to start with views and URLs. If, as your API grows in complexity, you find yourself repeating the same endpoint patterns over and over again, then look to viewsets and routers. Until then, keep things simple.

Schemas and documentation

Schemas

To add this, we have to install PyYAML and uritemplate

$ pip install pyyaml uritemplate

then create a static machine-readable schema with:

$ python manage.py generateschema > openapi-schema.yml

or we create a dynamic version that is served at a URL. For this, we have to change our project's urls.py like this:

...
from rest_framework.schemas import get_schema_view # new


urlpatterns = [
	...
    path('openapi', get_schema_view( # new
        title="Blog API",
        description="A sample API for learning DRF",
        version="1.0.0"
    ), name='openapi-schema'),
]

Documentation

To make it more human-friendly, we install drf-yasg (which serves both Swagger UI and ReDoc views) and add it to our settings.py and urls.py:

$ pip install drf-yasg
# config/settings.py
INSTALLED_APPS = [
	...
	# 3rd-party apps
	...
	'drf_yasg', # new
	...
]

# config/urls.py
...
from rest_framework import permissions # new
from drf_yasg.views import get_schema_view # new
from drf_yasg import openapi # new

schema_view = get_schema_view( # new
    openapi.Info(
        title="Blog API",
        default_version="v1",
        description="A sample API for learning DRF",
        terms_of_service="https://www.google.com/policies/terms/",
        contact=openapi.Contact(email="hello@example.com"),
        license=openapi.License(name="BSD License"),
    ),
    public=True,
    permission_classes=(permissions.AllowAny,),
)

urlpatterns = [
	...
    path('swagger/', schema_view.with_ui( # new
        'swagger', cache_timeout=0), name='schema-swagger-ui'),
    path('redoc/', schema_view.with_ui( # new
        'redoc', cache_timeout=0), name='schema-redoc'),
]

Tutorial summary

REST - Representational State Transfer
API - Application Programming Interface

pip install djangorestframework

and add it to the Installed apps in settings.py:

INSTALLED_APPS = [
     ....
    'rest_framework',
    ....
]
  1. The REST framework needs a serializer. The serializer defines how to format DB data into formats like JSON for a GET request, and how to parse a POST request into data that goes into the DB. A plain serializer class is written much like the model class, with special methods to handle requests (get, post etc.). Because of these similarities in structure and API duties, the class can be simplified with ModelSerializer. This can be as simple as:

    from rest_framework import serializers
    from app.models import App
    
    class AppSerializer(serializers.ModelSerializer):
    	class Meta:
    		model = App
    		fields = ['id', 'title', 'other', 'important', 'fields']
    

    The implementation can be checked in the Django shell by importing the serializer and calling print(repr(Serializername()))

  2. Just like the serializer, the API I/O is normally fairly similar across endpoints. Therefore, the view that connects the API with the outside world can be simplified using generic class-based views. An API that retrieves all elements of a database table with a GET request can be implemented like this:

    from App.models import Modelname
    from app.serializers import AppSerializer
    from rest_framework import generics
    
    class AppList(generics.ListCreateAPIView):
       queryset = App.objects.all()
       serializer_class = AppSerializer
    

    and an API that retrieves, stores or deletes a single db entry:

    class AppDetail(generics.RetrieveUpdateDestroyAPIView):
        queryset = App.objects.all()
        serializer_class = AppSerializer
    

    More about generic views can be found here. Note: in the urls file, class-based views need the .as_view() method attached. To be able to use API suffixes (.json, .api) remember to modify the urlpatterns with

    urlpatterns = format_suffix_patterns(urlpatterns)
    
  3. Permissions can be handled with the permission classes in the generic views (rest_framework permissions need to be imported):

    class SnippetDetail(generics.RetrieveUpdateDestroyAPIView):
    	permission_classes = [permissions.IsAuthenticatedOrReadOnly,
    	IsOwnerOrReadOnly]
    

    In this example, "IsOwnerOrReadOnly" is a custom permission defined in a permissions.py file. Documentation can be found here. Authentication is built into the REST framework and just needs to be included in the project-level urls.py file:

    urlpatterns += [
        path('anypath/', include('rest_framework.urls')),
    ]
    
  4. Pagination and hyperlinking in a nutshell: let the serializer do the work. Make sure URL names fit and let the serializer class inherit from HyperlinkedModelSerializer.

  • mkdir, git init and git set upstream (or clone the repo)
  • virtualenv
  • gitignore (add the virtualenv)
  • pip install django
  • django-admin startproject config .

(from app_engine_deploy)

  • Create the GCP project
  • add a PostgreSQL server
  • set up the Cloud SQL Proxy
  • migrate

python manage.py startapp appname

and add the app to the installed apps: 'appname.apps.AppnameConfig',

create model

from django.db import models
from django.contrib.auth.models import User


class Post(models.Model):
    author = models.ForeignKey(User, on_delete=models.CASCADE)
    title = models.CharField(max_length=50)
    body = models.TextField()
    created_at = models.DateTimeField(auto_now_add=True)
    updated_at = models.DateTimeField(auto_now=True)
    
    def __str__(self):
        return self.title

makemigrations, migrate, then add to app/admin.py:

from .models import Modelname
admin.site.register(Modelname)

createsuperuser, create dummy DB entries, delete tests.py, create __init__.py in appname/tests, then create a first assert test for the DB in test_models.py:

from django.test import TestCase
from django.contrib.auth.models import User
from [APPNAME].models import Post


class BlogTests(TestCase):

    @classmethod
    def setUpTestData(cls):
        # Create a user
        testuser1 = User.objects.create_user(
        username='testuser1', password='abc123')
        testuser1.save()
        # Create a blog post
        test_post = Post.objects.create(
        author=testuser1, title='Blog title', body='Body content...')
        test_post.save()


    def test_blog_content(self):
        post = Post.objects.get(id=1)
        author = f'{post.author}'
        title = f'{post.title}'
        body = f'{post.body}'

        self.assertEqual(author, 'testuser1')
        self.assertEqual(title, 'Blog title')
        self.assertEqual(body, 'Body content...')

python manage.py test

pip install djangorestframework, add rest_framework to settings.py and add this for a basic setup:

# urls.py
from django.contrib import admin
from django.urls import path, include

urlpatterns = [
    path('admin/', admin.site.urls),
    path('api/v1/', include('[APPNAME].urls')),
]

# appname/urls.py
from django.urls import path
from .views import UserList, UserDetail, PostList, PostDetail

urlpatterns = [
	path('users/', UserList.as_view()),
	path('users/<int:pk>/', UserDetail.as_view()),
	path('', PostList.as_view()),
	path('<int:pk>/', PostDetail.as_view()),
]

#views.py
from django.contrib.auth import get_user_model # new
from rest_framework import generics
from .models import Post
from .permissions import IsAuthorOrReadOnly
from .serializers import PostSerializer, UserSerializer

class PostList(generics.ListCreateAPIView):
	queryset = Post.objects.all()
	serializer_class = PostSerializer
	
class PostDetail(generics.RetrieveUpdateDestroyAPIView):
	permission_classes = (IsAuthorOrReadOnly,)
	queryset = Post.objects.all()
	serializer_class = PostSerializer
	
class UserList(generics.ListCreateAPIView): # new
	queryset = get_user_model().objects.all()
	serializer_class = UserSerializer
	
class UserDetail(generics.RetrieveUpdateDestroyAPIView): # new
	queryset = get_user_model().objects.all()
	serializer_class = UserSerializer

#serializers.py
from rest_framework import serializers
from django.contrib.auth import get_user_model
from .models import Post


class PostSerializer(serializers.ModelSerializer):
    class Meta:
        fields = ('id', 'author', 'title', 'body', 'created_at',)
        model = Post


class UserSerializer(serializers.ModelSerializer):
	class Meta:
		model = get_user_model()
		fields = ('id', 'username',)

continue in restframework

Ecomm tut

Link

Setup and install Django

create repo
clone repo
create virtualenv: virtualenv env
source env/bin/activate
pip install django
pip install djangorestframework
pip install django-cors-headers
pip install djoser
pip install pillow <-- image library
pip install stripe
django-admin startproject djackets_django

add to setting.py -> installed apps:

'rest_framework',
'rest_framework.authtoken',
'corsheaders',
'djoser',

configure cors (add to settings.py):

CORS_ALLOWED_ORIGINS = [
	"http://localhost:8080",
]

configure middleware:

'corsheaders.middleware.CorsMiddleware',
#has to be above commonmiddleware

configure urls:

from django.urls import include
urlpatterns = [
...
path('api/v1/', include('djoser.urls')),
path('api/v1/', include('djoser.urls.authtoken')),

makemigrations - migrate - createsuperuser - Done!

Install and setup vue

install vue on the pc if necessary:

npm install -g @vue/cli

start project:

vue create djackets_vue

Install packages:

cd djackets_vue
npm install axios <- make API calls
npm install bulma <- CSS framework

add FontAwesome to public/index.html:

<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/font-awesome/5.15.2/css/all.min.css">

replace style in app.vue with:

@import '../node_modules/bulma';
and replace navbar according to vid @18:30

Create Django app

python manage.py startapp product

create models ...

Creating thumbnails of images:

To create thumbnails of images, you need to import the Pillow library, BytesIO and the File wrapper:

from django.core.files import File
from io import BytesIO
from PIL import Image

and add this function to the corresponding model:

def make_thumbnail(self, image, size=(300, 200)):
    img = Image.open(image)
    img = img.convert('RGB')  # convert() returns a new image, so reassign it
    img.thumbnail(size)
    thumb_io = BytesIO()
    img.save(thumb_io, 'JPEG', quality=85)
    thumbnail = File(thumb_io, name=image.name)
    return thumbnail

and add media to settings.py:

MEDIA_URL = '/media/'
MEDIA_ROOT = BASE_DIR / 'media'

and to urls.py:

from django.conf import settings
from django.conf.urls.static import static
...

urlpatterns = [
...
] + static(settings.MEDIA_URL, document_root=settings.MEDIA_ROOT)

creating the API view

from rest_framework.views import APIView
from rest_framework.response import Response

from .models import Product
from .serializers import ProductSerializer

class LatestProductsList(APIView):
    def get(self, request, format=None):
        products = Product.objects.all()[:4]
        serializer = ProductSerializer(products, many=True)
        return Response(serializer.data)

and add that view to products/urls.py (keep the urls for each app separated):

from django.urls import path, include
from product import views

urlpatterns = [
    path('latest-products/', views.LatestProductsList.as_view()),
]

and don't forget to add a link to the apps urls into the projects urls.py:

urlpatterns = [
	...
    path('api/v1/', include('product.urls')),
] ...

Create Vue frontpage

Create the page and import the data with axios. For that:

  1. Create an empty data array
  2. create a method with an axios get request to consume the api
  3. create a mounted lifecycle hook to import the data
<script>
import axios from 'axios' //<- import the module
export default {
  name: 'Home',
  data() {
    return {
      latestProducts: [] //empty Array
    }
  },
  components: {
  },
  mounted() { //<-- lifecycle hook calling the method once the DOM is mounted
    this.getLatestProducts()
  },
  methods: {
    getLatestProducts() { //<- the axios get request
      axios
        .get('/api/v1/latest-products/')
        .then(response => {
          this.latestProducts = response.data
        })
        .catch(error => {
          console.log(error)
        })
    }
  }
}
</script>

It won't work yet because axios doesn't know the base URL and still needs to be imported into main.js:

import axios from 'axios'

axios.defaults.baseURL = 'http://127.0.0.1:8000'

createApp(App).use(store).use(router, axios).mount('#app')

Product detail page

Backend First, create the API endpoint for the product detail on the backend. views.py:

from django.http import Http404 # needed for the 404 below

class ProductDetail(APIView):
    def get_object(self, category_slug, product_slug):
        try:
            return Product.objects.filter(category__slug=category_slug).get(slug=product_slug)
        except Product.DoesNotExist:
            raise Http404

    def get(self, request, category_slug, product_slug, format=None):
        product = self.get_object(category_slug, product_slug)
        serializer = ProductSerializer(product)

        return Response(serializer.data)

Frontend: add Product.vue to the views folder. Check the template here and add the product page to the Vue router -> index.js:

import Product from '../views/Product.vue'
const routes = [
...
 {
   path: '/:category_slug/:product_slug',
   name: 'Product',
   component: Product
 }
]

Slugs

The way the URL links work is this: they start in the backend with the model's get_absolute_url method, which combines the category slug and the product slug into a link. That link is shown on the front page, which loops through every product object. When a link is clicked, the router reads the slugs from the URL and passes them on as arguments to the detail view. The detail view calls the backend API with an axios request, using the slugs to identify the item, and the backend delivers the further details to the product view.

  1. Backend: the model's get_absolute_url method creates a URL to the db entry out of slugs
  2. The main Vue page receives a list of all objects, loops through it, and creates a card for each object with a link built from its slugs
  3. When a link is clicked, the product page is displayed. The Vue router hands the slugs from the URL to the Vue view, and the view's method calls the backend API using those same slugs
  4. The backend API returns the details of that single object, which the Vue view uses to create the product detail page
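The flow above can be sketched in plain Python. This is only an illustration: the Product class and the simplified slugify helper are stand-ins for the real Django model and django.utils.text.slugify.

```python
import re

def slugify(value):
    # simplified version of django.utils.text.slugify:
    # lowercase, replace runs of non-alphanumerics with a hyphen
    return re.sub(r'[^a-z0-9]+', '-', value.lower()).strip('-')

class Product:
    # minimal stand-in for the Django model (hypothetical fields)
    def __init__(self, name, category_slug):
        self.name = name
        self.slug = slugify(name)
        self.category_slug = category_slug

    def get_absolute_url(self):
        # matches the Vue route path '/:category_slug/:product_slug'
        return f'/{self.category_slug}/{self.slug}/'

print(Product('Winter Jacket', 'winter').get_absolute_url())
# /winter/winter-jacket/
```

The same string that the backend renders into the product card is later split back into its two slugs by the Vue router, which is why the route pattern and get_absolute_url have to agree.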

State management with Vuex

VueX Create the store functionality in store/index.js:

import { createStore } from 'vuex'

export default createStore({
  state: {
    cart: {
      items: [],
    },
    isAuthenticated: false,
    token: '',
    isLoading: false
  },
  mutations: {
    initializeStore(state) {
      if (localStorage.getItem('cart')) {
        state.cart = JSON.parse(localStorage.getItem('cart'))
      } else {
        localStorage.setItem('cart', JSON.stringify(state.cart))
      }
    },
    addToCart(state, item) {
      const exists = state.cart.items.filter(i => i.product.id === item.product.id)

      if (exists.length) {
        exists[0].quantity = parseInt(exists[0].quantity) + parseInt(item.quantity)
      } else {
        state.cart.items.push(item)
      }

      localStorage.setItem('cart', JSON.stringify(state.cart))
    }
  },
  actions: {
  },
  modules: {
  }
})

and add the store to the App.vue:

<script>
  export default{
    data (){
      return {
	...
        cart: {
          items: [],
        }
      }
    },
    beforeCreate() {
      this.$store.commit('initializeStore') //commit calls mutations defined in the store file
    }
    
  }
</script>

to make it possible to add an item to the cart, this needs to be implemented in the Product page. First, a button is added with the functionality:

...
	<div class="control">
	    <a class="button is-dark" @click="addToCart()">Add to cart</a>
	</div>
...


<script>
...
export default {
    name: 'Product',
    data() {
        return {
            product: {},
            quantity: 1
        }
    },
...
    methods: {
...
        addToCart() {
            if (isNaN(this.quantity) || this.quantity < 1) {
                this.quantity = 1
            }
            const item = {
                product: this.product,
                quantity: this.quantity
            }
            this.$store.commit('addToCart', item)
        }
    }
}
</script>

and make sure the cart is initialised as a variable in App.vue:

<script>
...
    mounted() {
      this.cart = this.$store.state.cart
    },
    computed: {
      cartTotalLength() {
        let totalLength = 0

        for (let i = 0; i < this.cart.items.length; i++) {
          totalLength += this.cart.items[i].quantity
        }

        return totalLength 
      }
    }
    
  }
</script>

To give the add-to-cart button some feedback, we will use a "toast" popup that shows that an item has been added. Bulma has an extension for this that can be installed with:

$ npm install bulma-toast

and add the toast functionality to the product view:

<script>
...
import { toast } from 'bulma-toast'
export default {
	...
    methods: {
		...
            toast({
                message: 'The product was added to the cart',
                type: 'is-success',
                dismissible: true,
                pauseOnHover: true,
                duration: 2000,
                position: 'bottom-right',
            })
...
</script>

Add a loading bar while the FE communicates with the BE server

in store/index.js, add an isLoading state and the functionality to use it:

import { createStore } from 'vuex'

export default createStore({
  state: {
	...
    isLoading: false
  },
  mutations: {
	...
    setIsLoading(state, status) {
      state.isLoading = status
    }

and set that status to true as soon as the product page requests the details from the server (views/Product.vue). To make sure loading is not set back to false before the axios call has actually finished, make the function async and await the axios call:

<script>
...
export default {
...
    methods: {
        async getProduct() {
            this.$store.commit('setIsLoading', true)
		...await axios call
            this.$store.commit('setIsLoading', false)
        },

and add the loading symbol to the app.vue:

    <div class="is-loading-bar has-text-centered" :class="{'is-loading': $store.state.isLoading}">
      <div class="lds-dual-ring"></div>
    </div>

Create a category view

To create the category view we go the same way as for the detail view:

  • Create view file Category.vue
  • Within, create the template, the axios call etc.
  • Create the entry in the router/index.js

Important: If we now navigate from Winter to Summer, the page will not update, because both categories match the same route pattern ('/:category_slug'), so the mounted lifecycle hook is not called again. To make navigation between such similarly named, dynamically created routes possible, there is an option called "watch". This has to be added to Category.vue:

<script>
...
export default {
	...
    watch: {
        $route(to, from) {
            if (to.name === 'Category') {
                this.getCategory()
            }
        }
    },

Add search function

Backend

First, we add an Endpoint in our backend for the search functionality. In our views.py we add:

from django.db.models import Q # Q objects allow complex queries (& = AND, | = OR)
from rest_framework.decorators import api_view # decorator restricting this function-based view to POST requests
....
@api_view(['POST'])
def search(request):
    query = request.data.get('query', '')

    if query:
        products = Product.objects.filter(Q(name__icontains=query) | Q(description__icontains=query))
        serializer = ProductSerializer(products, many=True)
        return Response(serializer.data)
    else:
        return Response({'products': []})

and add the entry to the urls.py file:

urlpatterns = [
...
    path('products/search/', views.search),
    ...
]

Frontend

add a searchbar to the navbar in App.vue:

<form method="get" action="/search">
  <div class="field has-addons">
    <div class="control">
      <input type="text" class="input" placeholder="What are you looking for?" name="query">
    </div>
    <div class="control">
      <button class="button is-success">
        <span class="icon">
          <i class="fas fa-search"></i>
        </span>
      </button>
    </div>
  </div>
</form>

and create a landing page for that site that displays the results:

<template>
... 
</template>

<script>
import axios from 'axios'
import ProductBox from '@/components/ProductBox.vue'

    export default {
        name: 'Search',
        components: {
            ProductBox
        },
        data() {
            return {
                products: [],
                query: '',
            }
        },
        mounted() {
            document.title = 'Search | Djackets '

            let uri = window.location.search.substring(1) // the query string without the leading '?'
            let params = new URLSearchParams(uri) // parses parameters like 'query=jacket'

            if (params.get('query')) {
                this.query = params.get('query')

                this.performSearch()
            }
        },
        methods: {
            async performSearch () {
                this.$store.commit('setIsLoading', true)

                await axios
                    .post('/api/v1/products/search/', {'query': this.query})
                    .then(response => {
                        this.products = response.data
                    })
                    .catch(error => {
                        console.log(error)
                    })
                
                this.$store.commit('setIsLoading', false)
            }
        }
    }
</script>

add that page to the router:

import Search from '../views/Search.vue'

const routes = [
	...
  {
    path: '/search',
    name: 'Search',
    component: Search
  },
]

Done!

Add a cart

Kind of the same thing by now. Add a view and a router entry. In this instance, we want to display each item in a particular way so it makes sense to create a component for this too. The most important parts are the functionality sections of the site and the component. Cart.vue:

<template>
    <div class="page-cart">
        <div class="columns is-multiline">
            <div class="column is-12">
                <h1 class="title">Cart</h1>
            </div>

            <div class="column is-12 box">
                <table class="table is-fullwidth" v-if="cartTotalLength">
                    <thead>
                        <tr>
                            <th>Product</th>
                            <th>Price</th>
                            <th>Quantity</th>
                            <th>Total</th>
                            <th></th>
                        </tr>
                    </thead>

                    <tbody>
                        <CartItem
                            v-for="item in cart.items"
                            v-bind:key="item.product.id"
                            v-bind:initialItem="item"
                            v-on:removeFromCart="removeFromCart" />
                    </tbody>
                </table>

                <p v-else>You don't have any products in your cart...</p>
            </div>

            <div class="column is-12 box">
                <h2 class="subtitle">Summary</h2>

                <strong>${{ cartTotalPrice.toFixed(2) }}</strong>, {{ cartTotalLength }} items

                <hr>

                <router-link to="/cart/checkout" class="button is-dark">Proceed to checkout</router-link>
            </div>
        </div>
    </div>
</template>

<script>
import axios from 'axios'
import CartItem from '@/components/CartItem.vue'
export default {
    name: 'Cart',
    components: {
        CartItem
    },
    data() {
        return {
            cart: {
                items: []
            }
        }
    },
    mounted() {
        this.cart = this.$store.state.cart
    },
    methods: {
        removeFromCart(item) {
            this.cart.items = this.cart.items.filter(i => i.product.id !== item.product.id)
        }
    },
    computed: {
        cartTotalLength() {
            return this.cart.items.reduce((acc, curVal) => {
                return acc + curVal.quantity
            }, 0)
        },
        cartTotalPrice() {
            return this.cart.items.reduce((acc, curVal) => {
                return acc + curVal.product.price * curVal.quantity
            }, 0)
        },
    }
}
</script>

and the component CartItem.vue:

<template>
    <tr>
        <td><router-link :to="item.product.get_absolute_url">{{ item.product.name }}</router-link></td>
        <td>${{ item.product.price }}</td>
        <td>
            {{ item.quantity }}
            <a @click="decrementQuantity(item)">-</a>
            <a @click="incrementQuantity(item)">+</a>
        </td>
        <td>${{ getItemTotal(item).toFixed(2) }}</td>
        <td><button class="delete" @click="removeFromCart(item)"></button></td>
    </tr>
</template>

<script>
export default {
    name: 'CartItem',
    props: {
        initialItem: Object
    },
    data() {
        return {
            item: this.initialItem
        }
    },
    methods: {
        getItemTotal(item) {
            return item.quantity * item.product.price
        },
        decrementQuantity(item) {
            item.quantity -= 1
            if (item.quantity === 0) {
                this.$emit('removeFromCart', item)
            }
            this.updateCart()
        },
        incrementQuantity(item) {
            item.quantity += 1
            this.updateCart()
        },
        updateCart() {
            localStorage.setItem('cart', JSON.stringify(this.$store.state.cart))
        },
        removeFromCart(item) {
            this.$emit('removeFromCart', item)
            this.updateCart()
        },
    },
}
</script>

Done!

Signup

We only need to do the frontend part of this. The BE implementation is already done by importing djoser. Create a sign-up page on the frontend. This one has forms with simple validation and a connection to the backend:

<template>
    <div class="page-sign-up">
        <div class="columns">
            <div class="column is-4 is-offset-4">
                <h1 class="title">Sign up</h1>

                <form @submit.prevent="submitForm">
                    <div class="field">
                        <label>Username</label>
                        <div class="control">
                            <input type="text" class="input" v-model="username">
                        </div>
                    </div>

                    <div class="field">
                        <label>Password</label>
                        <div class="control">
                            <input type="password" class="input" v-model="password">
                        </div>
                    </div>

                    <div class="field">
                        <label>Repeat password</label>
                        <div class="control">
                            <input type="password" class="input" v-model="password2">
                        </div>
                    </div>

                    <div class="notification is-danger" v-if="errors.length">
                        <p v-for="error in errors" v-bind:key="error">{{ error }}</p>
                    </div>

                    <div class="field">
                        <div class="control">
                            <button class="button is-dark">Sign up</button>
                        </div>
                    </div>

                    <hr>

                    Or <router-link to="/log-in">click here</router-link> to log in!
                </form>
            </div>
        </div>
    </div>
</template>

<script>
import axios from 'axios'
import { toast } from 'bulma-toast'
export default {
    name: 'SignUp',
    data() {
        return {
            username: '',
            password: '',
            password2: '',
            errors: []
        }
    },
    methods: {
        submitForm() {
            this.errors = []
            if (this.username === '') {
                this.errors.push('The username is missing')
            }
            if (this.password === '') {
                this.errors.push('The password is missing')
            }
            if (this.password !== this.password2) {
                this.errors.push('The passwords don\'t match')
            }
            if (!this.errors.length) {
                const formData = {
                    username: this.username,
                    password: this.password
                }
                axios
                    .post("/api/v1/users/", formData)
                    .then(response => {
                        toast({
                            message: 'Account created, please log in!',
                            type: 'is-success',
                            dismissible: true,
                            pauseOnHover: true,
                            duration: 2000,
                            position: 'bottom-right',
                        })
                        this.$router.push('/log-in')
                    })
                    .catch(error => {
                        if (error.response) {
                            for (const property in error.response.data) {
                                this.errors.push(`${property}: ${error.response.data[property]}`)
                            }
                            console.log(JSON.stringify(error.response.data))
                        } else if (error.message) {
                            this.errors.push('Something went wrong. Please try again')
                            
                            console.log(JSON.stringify(error))
                        }
                    })
            }
        }
    }
}
</script>

The log-in page will be almost identical. For the state management on the front end side (logged in / logged out), we have to add an option to initializeStore in store/index.js that sets the isAuthenticated state by checking whether a session token is present. We also need mutations to set and remove the token. store/index.js:

  mutations: {
    initializeStore(state) {
	...
      if (localStorage.getItem('token')) {
        state.token = localStorage.getItem('token')
        state.isAuthenticated = true
      } else {
        state.token = ''
        state.isAuthenticated = false
      }
    },
    ...
    setToken(state, token) {
      state.token = token
      state.isAuthenticated = true
    },
    removeToken(state) {
      state.token = ''
      state.isAuthenticated = false
    }

to make the token available to our axios calls in our views, we add this bit to our App.vue file:

      const token = this.$store.state.token

      if (token) {
        axios.defaults.headers.common['Authorization'] = 'Token ' + token
      } else {
        axios.defaults.headers.common['Authorization'] = ''
      }

Done. The BE implementation is handled by djoser.

My account

The my-account page is a page where you can log out or see your order history. The page looks like this:

<template>
    <div class="page-my-account">
        <div class="columns is-multiline">
            <div class="column is-12">
                <h1 class="title">My account</h1>
            </div>

            <div class="column is-12">
                <button @click="logout()" class="button is-danger">Log out</button>
            </div>

            <hr>

            <div class="column is-12">
                <h2 class="subtitle">My orders</h2>

                <OrderSummary
                    v-for="order in orders"
                    v-bind:key="order.id"
                    v-bind:order="order" />
            </div>
        </div>
    </div>
</template>

<script>
import axios from 'axios'
import OrderSummary from '@/components/OrderSummary.vue'
export default {
    name: 'MyAccount',
    components: {
        OrderSummary
    },
    data() {
        return {
            orders: []
        }
    },
    mounted() {
        document.title = 'My account | Djackets'
        this.getMyOrders()
    },
    methods: {
        logout() {
            axios.defaults.headers.common["Authorization"] = ""
            localStorage.removeItem("token")
            localStorage.removeItem("username")
            localStorage.removeItem("userid")
            this.$store.commit('removeToken')
            this.$router.push('/')
        },
        async getMyOrders() {
            this.$store.commit('setIsLoading', true)
            await axios
                .get('/api/v1/orders/')
                .then(response => {
                    this.orders = response.data
                })
                .catch(error => {
                    console.log(error)
                })
            this.$store.commit('setIsLoading', false)
        }
    }
}
</script>

the important part here is the router implementation:

...
import store from '../store'
...
const routes = [
...
  {
    path: '/my-account',
    name: 'MyAccount',
    component: MyAccount,
    meta: {
      requireLogin: true
    }
  },
]

const router = createRouter({
  history: createWebHistory(process.env.BASE_URL),
  routes
})

router.beforeEach((to, from, next) => {
  if (to.matched.some(record => record.meta.requireLogin) && !store.state.isAuthenticated) {// if the page that is about to be accessed has requireLogin true and isAuthenticated is false ...
    next({ name: 'LogIn', query: { to: to.path } });//... you will be forwarded to the login page ...
  } else {
    next()//... if not you will be forwarded to the page that had been requested
  }
})

Add stripe payment

Set up a Stripe account and add the API key to the BE settings.py. Also, create a new app to manage payments (python manage.py startapp order):

STRIPE_SECRET_KEY = 'IANGOIEANGOIENGOIEANGOIIN'
...
INSTALLED_APPS = [
...
'order',
]

on the frontend side, don't forget to add the Stripe script tag to the public/index.html file:

   <script src="https://js.stripe.com/v3/"></script>

Use the secret key on the backend and the publishable key on the frontend.

Fixtures

Create fixtures from dev db:

python manage.py dumpdata app.Modelname --indent 4 > app/fixtures/filename.json
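A fixture produced this way is a JSON list of serialized objects. A sketch of what such a file might look like (the model name and fields here are assumptions for illustration):

```json
[
    {
        "model": "product.product",
        "pk": 1,
        "fields": {
            "category": 1,
            "name": "Winter Jacket",
            "slug": "winter-jacket",
            "price": "129.00"
        }
    }
]
```

Load it back into another database with python manage.py loaddata filename.json.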

How to deploy a Django project to Heroku

Testing

Deploy on Google App Engine

  • Create project in GC
  • Enable API: -- Cloud Logging API -- Compute Engine API -- Cloud SQL Admin API
  • create sql instance in GC project

Set up the gcloud SDK and init the project. Useful commands for this:

$ gcloud config get-value project
$ gcloud projects list
$ gcloud config set project my-project-id

Setup Cloud SQL Auth proxy:

$ gcloud sql instances describe [YOUR_INSTANCE_NAME] // last part of connection name
$ wget https://dl.google.com/cloudsql/cloud_sql_proxy.linux.amd64 -O cloud_sql_proxy
$ chmod +x cloud_sql_proxy
$ ./cloud_sql_proxy -instances=[YOUR_INSTANCE_CONNECTION_NAME]=tcp:3306

This connects the project in the SDK with the db: the Cloud SQL proxy binary is downloaded into the project folder, given execute rights, and then started as a daemon so that the local version connects to the upstream db.

Modify settings.py so it automatically detects if it is accessed online or locally:

import os
...
if os.getenv('GAE_APPLICATION', None):
    # Running on production App Engine, so connect to Google Cloud SQL using
    # the unix socket at /cloudsql/<your-cloudsql-connection string>
    DATABASES = {
        'default': {
            'ENGINE': 'django.db.backends.postgresql', # or mysql, sqlite3, oracle
            'HOST': '/cloudsql/[YOUR-CONNECTION-NAME]',
            'USER': '[YOUR-USERNAME]',
            'PASSWORD': '[YOUR-PASSWORD]',
            'NAME': '[YOUR-DATABASE]',
        }
    }
else:
    # Running locally so connect to either a local MySQL instance or connect 
    # to Cloud SQL via the proxy.  To start the proxy via command line: 
    #    $ cloud_sql_proxy -instances=[INSTANCE_CONNECTION_NAME]=tcp:3306 
    # See https://cloud.google.com/sql/docs/mysql-connect-proxy
    DATABASES = {
        'default': {
            'ENGINE': 'django.db.backends.postgresql', # or mysql, sqlite3, oracle
            'HOST': '127.0.0.1',
            'PORT': '3306',
            'NAME': '[YOUR-DATABASE]',
            'USER': '[YOUR-USERNAME]',
            'PASSWORD': '[YOUR-PASSWORD]',
        }
    }
$ pip install psycopg2-binary

to make the connection from Django to Postgres. You have now set up a new database connection for Django, so don't forget to migrate and createsuperuser.

app.yaml

runtime: python38

handlers:
# This configures Google App Engine to serve the files in the app's
# static directory.
- url: /static
  static_dir: static/
# This handler routes all requests not caught above to the main app. 
# It is required when static routes are defined, but can be omitted 
# (along with the entire handlers section) when there are no static 
# files defined.
- url: /.*
  script: auto

entrypoint: gunicorn -b :$PORT djackets_django.wsgi

main.py

from djackets_django.wsgi import application
# App Engine by default looks for a main.py file at the root of the app
# directory with a WSGI-compatible object called app.
# This file imports the WSGI-compatible object of the Django app,
# application from mysite/wsgi.py and renames it app so it is
# discoverable by App Engine without additional configuration.
# Alternatively, you can add a custom entrypoint field in your app.yaml:
# entrypoint: gunicorn -b :$PORT mysite.wsgi
app = application

settings (DB, static ...)

ALLOWED_HOSTS = ['django-test-311217.ew.r.appspot.com']
...
STATIC_URL = '/static/'
MEDIA_URL = '/media/'
MEDIA_ROOT = BASE_DIR / 'media'
STATIC_ROOT = 'static'

requirements.txt (gunicorn, psycopg2 ...)

gunicorn==20.1.0
psycopg2-binary==2.8.6 #if postgresql is used
$ python manage.py collectstatic
$ gcloud app deploy

Django quickstart

  • create virtualenvironment: $ virtualenv envprojectname
  • $ conda deactivate -> source envprojectname/bin/activate
  • pip install django
  • Create project: $ django-admin startproject projectname (use . after projectname to initialise in current folder)
  • Create app: $ python manage.py startapp appname
  • Register app in settings.py - > INSTALLED_APPS = ['appname.apps.AppnameConfig', ]
  • add app to urls.py
    from django.contrib import admin
    from django.urls import path, include
    
    urlpatterns = [
    	path('admin/', admin.site.urls),
    	path('', include('appname.urls')),
    ]
    
  • create boilerplate app -> urls.py
    from django.urls import path
    
    from . import views
    
    urlpatterns = [
        path('', views.index),
    ]
    
  • create boilerplate app -> views.py
    from django.http import HttpResponse
    
    
    def index(request):
        return HttpResponse("Hello, world")
    
  • migrate
  • create admin: $ python manage.py createsuperuser
  • create models, views, templates, statics, fixtures etc.
  • register models in admin.py
  • set Time_Zone in settings.py
  • migrate with: $ python manage.py makemigrations and $ ... migrate

DOM

Sustainability / Scalability

Projects

This section is a collection of all the little coding projects I participate in.

Json automation for surf

Automate update of Surf json file with cron

crontab -l
crontab -e

  1. Fetch Github parent repo - 02:00

  2. Api Handler - 02:30

  3. JSON Handler - 02:40

  4. push dist repo to github - 03:00
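The schedule above could be wired up in the crontab roughly like this (the script paths and names are hypothetical placeholders, not the actual project files):

```
# m   h   dom mon dow   command
0     2   *   *   *     /home/pi/surf/fetch_parent_repo.sh
30    2   *   *   *     /usr/bin/python3 /home/pi/surf/api_handler.py
40    2   *   *   *     /usr/bin/python3 /home/pi/surf/json_handler.py
0     3   *   *   *     /home/pi/surf/push_dist_repo.sh
```

The half-hour gaps give each step time to finish before the next one depends on its output.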


access mariadb

Settings to make raspberry pi MariaDB listen to connections

/etc/mysql/my.cnf:

  • skip-networking=0

  • skip-bind-address

view privileges:

  • select user, host from mysql.user where host <> 'localhost'

TraLELHo


Tasks

Next

  • Create virtualenv for Django
  • Put it in requirements.txt

Data Handling

  • Create map of current data structure
  • Evaluate necessary modifications to data structure
  • Choose Database - none, using po files, maybe db for po collection later
  • Create Database and implement updated data structure
  • Scrape data and fill database
  • Verify that the data has been correctly transferred and adapted

DevOps

  • See if current hosting is suited
  • Migrate DB to new host if necessary
  • Migrate Back End / Front End
  • Set up Server

Back-End

  • Decide on Back End - Django
  • Create back end structure
    • Virtualenv
    • Container
    • Apps
    • Routes
    • Models (will be legacy from imported data)
  • Connect BE to DB and check connection
  • Create simple template
  • Adjust admin page for easy maintenance

Front-End

  • Decide on Front End technology and if FE even necessary
  • Implement Front End to play nice with Back End (template)
  • Style template

Organizational

  • Decide on hosting
  • Handover
  • Decide on license

Web Technologies

Packet switching

In telecommunications, packet switching is a method of grouping data that is transmitted over a digital network into packets. Packets are made of a header and a payload. Data in the header is used by networking hardware to direct the packet to its destination, where the payload is extracted and used by application software. You can check your nodes along the way with either $ tracepath or $ traceroute6:

Tracepath

Read TTL: Time to live. Tells a packet how many hops it is allowed to make.

When the hop count is exceeded, the packet is dropped and a message is sent back with details about the time it took and the IP of the last node it reached. Tracepath sends packets with increasing TTL, starting from 1, and collects the data that is sent back.

Security

Vulnerabilities

https://developer.mozilla.org/en-US/docs/Learn/Server-side/First_steps/Website_security

Write summary

Summarize book

History of computing

Early computing

The earliest device used for computing was the abacus, which makes calculations easier by having beads in rows, each row representing a different power of 10

Charles Babbage theorized the Analytical Engine, a mechanical computer that would not be restricted to one type of computation. He is therefore sometimes credited as the father of computing.

This inspired further scientists like Ada Lovelace, who is considered the first programmer.

One of the first computers to use electrical parts and punch cards was Herman Hollerith's tabulating machine, built to help count the 1890 American census. A punched-out hole allowed mercury to close an electrical circuit, actuating an electrical motor that turned the counting wheels. Hollerith founded his own company, which would later become International Business Machines (IBM).

Electronic computers

Harvard Mark I and Mark II

One of the biggest electromechanical computers ever was the Harvard Mark I, used among other things for computations in the Manhattan Project. It contained mechanical relays and around 765,000 components, synchronized by a 50 ft shaft driven by a 5 hp motor.

Because of the physical mass of the relay arm, switching speed is limited, with the quickest relays managing about 50 switches per second. The follow-up machine, the Harvard Mark II, was also the first computer to have a "bug": a dead moth that jammed one of its relays.

Mark 1

Vacuum tube

One of the first vacuum tubes was called the thermionic valve. It housed two electrodes in an evacuated glass tube and used thermionic emission: one electrode could be heated and would emit electrons, which would be attracted by the other electrode, but only if that electrode was positively charged. In that case, a current would flow from one electrode to the other.

In 1906, Lee de Forest added a third electrode that acted as a switch between the transmitting/receiving pair. This triode vacuum tube became hugely successful, as it was the first electrical switch without moving mechanical parts.

Electrical computers

This marked the switch from electromechanical to electronic computing. The Colossus Mark 1 was the first fully functional and programmable electronic computer, containing 1,600 vacuum tubes. Ten of them were built to decipher Nazi codes during the Second World War.

This was a massive improvement, but the types of calculations were still limited. The first general-purpose, programmable, electronic computer would then be the ENIAC at the University of Pennsylvania.

Colossus mark I

ENIAC

The transistor

The transistor was invented in 1947 at Bell Laboratories by three US physicists: John Bardeen (1908–1991), Walter Brattain (1902–1987), and William Shockley (1910–1989). It used semiconductors and could switch more than 10,000 times per second. Crucially, transistors were sturdy, used no fragile materials like glass, and could therefore be miniaturized almost immediately. In 1957, the first fully transistorized, commercially available computer came out, containing around 3,000 transistors.

Logic levels

Read A majority of systems we use rely on either 3.3 V or 5 V TTL levels. TTL is an acronym for Transistor-Transistor Logic. It relies on circuits built from bipolar transistors to achieve switching and maintain logic states. Transistors are basically fancy-speak for electrically controlled switches. For any logic family, there are a number of threshold voltage levels to know. Below is an example for standard 5 V / 3.3 V TTL levels:

VOH -- Minimum OUTPUT voltage level a TTL device will provide for a HIGH signal (2.7 V / 2.4 V)

VIH -- Minimum INPUT voltage level to be considered a HIGH (2 V / 2 V)

VOL -- Maximum OUTPUT voltage level a device will provide for a LOW signal (0.4 V / 0.5 V)

VIL -- Maximum INPUT voltage level to still be considered a LOW (0.8 V / 0.8 V)

Therefore, every voltage between 0.8 V and 2 V is considered invalid. This also shows that 5 V and 3.3 V TTL parts are compatible, though the higher voltage may damage the 3.3 V parts, so a voltage divider might be necessary.
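For the 5 V → 3.3 V direction, a quick sanity check of a divider with awk (R1 = 1 kΩ and R2 = 2 kΩ are assumed example values, not a recommendation):

```shell
# Vout = Vin * R2 / (R1 + R2); with 1k/2k this brings 5 V down to ~3.33 V
awk 'BEGIN { vin = 5; r1 = 1000; r2 = 2000; printf "%.2f V\n", vin * r2 / (r1 + r2) }'
# → 3.33 V
```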

Data Storage

Read

ARPANET

The first wide-area, packet-switching network, and the first network to implement the TCP/IP protocol suite (adopted in 1983). Established by ARPA of the US Department of Defense. Initiated in 1966 and deemed operational in 1975.


OS

commands

RTFM

$ man man - manual page about how to use manual pages

$ man -f /command/ - shows all entries for command; if there are multiple, the first manpage will be opened by default. Open other sections by putting the number before the command: $ man 2 mkdir

$ man -k /keyword/ - searches for a keyword inside the man pages

Loops

You can write for loops in bash. Example:

for x in {0..9}; do file ./-file0$x; done

The syntax is:

for VARIABLE in LIST; do command using $VARIABLE; done
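A self-contained, runnable variant of the same pattern (the file names are arbitrary examples; seq is used instead of {0..9} so it also works in plain POSIX sh):

```shell
# create ten empty files named file00 … file09
for x in $(seq 0 9); do
  touch "file0$x"
done

# the list can be anything: numbers, file names, command output …
for f in file00 file04 file09; do
  echo "have $f"
done
```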

random stuff

xargs will perform an operation on every argument coming from the pipe operator. For example:

find . | xargs file

"find ." will return every file in the current directory (.), the pipe operator passes that output to xargs, and xargs performs the file operation on every entry
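The same mechanism in a tiny self-contained form:

```shell
# printf emits two lines; xargs collects them and passes both as arguments to echo
printf 'one\ntwo\n' | xargs echo
# → one two
```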

ctrl + r - reverse search (look through history of commands)

history - see former commands

cp (file) (destination) - copy files to destination; file names can use shell globbing (*.jpg, [cf]at.exe …)

mv - move (same syntax as cp)

rm - remove (-f force, -i -r (dangerous!))

rmdir - remove directory

man (command) - show manual of command (whatis works too)

alias name="command" - give an extra name to a command

unalias name - removes that alias

~/.bashrc is the default file for saving aliases permanently. This file checks for the existence of ~/.bash_aliases and, if it exists, loads those instead. If you want to chain or pipe commands in an alias, you have to create a function within the alias. With $1 you can also pass parameters. I added this shortcut to quickly upload to GitHub:

alias gitupload='_gitupload(){ git add . ; git commit . -m "$1" ; git push; }; _gitupload'

cat - can do a lot of things

$ ln -s filename linkname - creates shortcut (softlink) to filename

$ ln filename hardlinkname - creates a hardlink, a linked copy of the file. It has the same inode number and is therefore identical; changes to one change both. However, each link can be deleted individually, reducing the file's link count. The inode (and the data) only gets deleted once the link count reaches zero.
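The inode behaviour can be demonstrated in a scratch directory (file names are arbitrary):

```shell
echo data > original
ln original hardlink         # same inode, link count becomes 2
ln -s original softlink      # symlink pointing at the *name* "original"
ls -li                       # original and hardlink show the same inode number

rm original                  # link count drops to 1; the data survives
cat hardlink                 # → data
cat softlink 2>/dev/null || echo "softlink is dangling"   # the name it pointed at is gone
```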

pstree -p - shows tree of running processes

$ strace /command/ - In the simplest case strace runs the specified command until it exits. It intercepts and records the system calls which are called by a process and the signals which are received by a process.

$ lsmod - lists currently loaded kernel modules

echo Hi > xyz.txt - creates xyz.txt and writes Hi into it; if it exists, the data is overwritten

echo Hi >> xyz.txt - creates xyz.txt and writes Hi into it; if it exists, the data is appended

< - redirect stdin (cat < peanuts.txt > banana.txt writes peanuts into banana)

2> - writes contents of error stream into destination (0 stdin, 1 stdout, 2 stderr)

ls /fake/directory > peanuts.txt 2>&1 - writes stderr and stdout into peanuts.txt

ls /fake/directory &> peanuts.txt - (same as line before)
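All of the redirections above in one runnable sequence (file names are arbitrary):

```shell
echo Hi  > xyz.txt                     # create/overwrite
echo Bye >> xyz.txt                    # append
cat < xyz.txt > banana.txt             # stdin from xyz.txt, stdout into banana.txt
ls /fake/directory > peanuts.txt 2>&1 || true   # ls fails, but its error lands in the file
cat peanuts.txt                        # shows the "No such file or directory" message
```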

tail -f /var/log/syslog - tail shows the last lines of a file; -f is the follow flag, meaning in this context that it monitors the end of the file. syslog is the system log file, so with this command we can create a terminal that tracks all system activity.

env | grep -i user - grep searches files for patterns; -i is the case-insensitive flag. This command takes the output of the environment variables and searches it for the word user. grep accepts regular expressions.

$ dd if=/home/pete/backup.img of=/dev/sdb bs=1024M count=2 - copies 2 blocks of 1024 MiB (2048 MiB total) from backup.img to /dev/sdb

list devices

$ lsusb - Listing USB Devices

$ lspci - Listing PCI Devices

$ lsscsi - Listing SCSI Devices

Ownership and permissions

ls -l /etc/shadow - shows permissions of the shadow file (where passwords are stored) (r - read, w - write, x - execute, s - SUID; Set User ID lets whoever runs a program inherit the permissions of the file's owner)

$ chmod 755 myfile - changes permissions of myfile to read(4) + write(2) + execute(1) (= 7) for the owner and read + execute (= 5) for the group and for everyone else. A leading 4 (4755) adds the SUID bit (s in the user field), a leading 2 (2755) adds the SGID bit (s in the group field), and a leading 1 (1755) adds the sticky bit (t, only owner and root can delete).

sudo chmod u+s myfile - adds the SUID permission to myfile (SGID is the group equivalent: g+s)

$ sudo chown patty myfile - changes owner of myfile to patty

sudo chgrp whales myfile - changes owning group to whales

$ sudo chown patty:whales myfile - changes owner of myfile to patty and group to whales

$ umask 021 - takes away permissions, none from user, write from group and execute from others
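A quick check of the octal notation on a scratch file (myfile is an arbitrary name):

```shell
touch myfile
chmod 755 myfile
ls -l myfile          # permission string starts with -rwxr-xr-x
chmod u+s myfile
ls -l myfile          # now -rwsr-xr-x: the SUID bit shows up as s
```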

watch -n 1 "ps aux | grep passwd" - re-runs the quoted ps command every second; handy for watching a process (here passwd) appear in the process list and observing its UIDs while it runs

Processes

ps aux - creates and shows a snapshot of processes (a - all processes, u - shows more details, x - shows processes without a tty (controlling terminal), a.k.a. daemon processes)

top - monitors running processes

flags

-a all (good with show files (dir or ls))

-r recursive (repeat until not possible to repeat anymore)

-i interactive (give prompts for example when overwriting files)

-l long (gives more details for example in the ps command)

Background

& - at the end of a command runs the process in the background (to send an already running process to the background, stop it with ctrl + z and enter $ bg)

$ fg JOBID to return the job with JOBID to the foreground, if no JOBID is given, the last job started will return to the foreground
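A minimal, non-interactive demonstration (sleep stands in for any long-running command):

```shell
sleep 1 &        # start in the background; the shell stays usable meanwhile
echo "shell is free while sleep runs (background PID $!)"
wait             # block until all background jobs have finished
echo "background job done"
```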

UID

There are three types of UIDs: effective, real, and saved.

The real UID is the UID of the user that launched the process. The effective UID is the one used for permission checks; when a program with the SUID bit is executed, the effective UID becomes the UID of the file's owner. If no SUID program is involved, real and effective UID are the same. The saved UID allows a process to switch between the effective and real UID.

Processes

PPID - parent process ID. When a process is called, an existing process is cloned and run as a child (with its own process ID); the child stores the ID of the original, now parent, process as its PPID. PPID 1 means the parent is the init process, which manages the OS. Init is the first process created by the kernel when the system boots up and can only be terminated by a system shutdown.

_exit is the system call to terminate a process. After it is invoked, the parent collects the exit status (mostly 0 for success), and only then is the process fully removed and its space freed again. If the parent process terminates before the child, the child becomes an orphan process that is adopted by init (the mother of all processes).

When a child process terminates without wait having been called by the parent, this process becomes a zombie. It does not use resources anymore, but it is still on the process table, and as space there is limited, zombies should be avoided (a few are OK). Zombies terminate after the parent process calls wait (if there is no parent, init does it).

Processes communicate using Signals

kill PID - Sends terminate signal to given process id (find with $ps | grep NAME)

kill -9 PID - the actual kill command. kills process without wait or cleanup

Only one process can use the CPU at once; the slot of time it gets is called a time slice. The kernel manages when each process may use the CPU. Processes can tell the kernel how much they need the CPU with a value called niceness. A high niceness value means the process lets less nice processes use the CPU first. The value can be negative.

with the $ nice command the niceness of a new process can be set, $ renice sets the niceness of an existing process

Everything in Linux is a file, and so are processes. Process information is stored in /proc

Find out more about the status of a process with cat /proc/(PID)/status. This is how the kernel sees the system.
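For example (Linux-only; /proc/self is whichever process reads it, here the shell itself):

```shell
# a few fields from the current process's status file
grep -E '^(Name|Pid|PPid|State|Uid)' /proc/self/status
```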

Packages

$ gzip file - zips file into archive file.gz

$ gunzip file.gz - reverses zip

$ tar cvf archive.tar file1 file2 - tar can bundle multiple files; the flags are c (create), v (verbose), f (filename; the archive name comes right after this flag, archive.tar here)

$ tar xvf archive.tar - x is the extract flag

A common technique is to bundle files with tar and then zip them into an archive.tar.gz. Tar lets you use the z flag to automatically zip or unzip an archive

$ tar czf myfile.tar.gz file1 file2

$ tar xzf myfile.tar.gz
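A full round trip in a scratch directory (file names are arbitrary):

```shell
echo hello > file1
echo world > file2
tar czf archive.tar.gz file1 file2   # bundle + gzip in one step
rm file1 file2
tar xzf archive.tar.gz               # unzip + extract in one step
cat file1                            # → hello
```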

$ dpkg X filename - Debian package manager; X can be -i for install, -r for remove or -l for list

$ apt install - install package

$ apt remove - remove package

$ apt update - update the package repository

$ apt upgrade - upgrade installed packages if possible

$ apt show packagename - show information about packagename

Compile from sourcecode

follow this guide (Attention: use checkinstall)

partitions

$ sudo parted -l - shows system partitions

$ sudo parted - starts the command line partition program (gparted for GUI)

(parted) select /dev/sda1 - selects device named /dev/sda1

(parted) print - shows partition table of selected device

(parted) mkpart [part-type name fs-type] start end - creates a partition (e.g. a primary partition on MBR) from the given start point to the given end point (e.g. mkpart primary 123 456)

$ fsck /name/ - filesystem check, make sure /name/ is unmounted

$ df -h - show info about mounted devices

$ df -i - show info about inodes

$ ls -li - shows inode numbers of current directory

$ du -h - shows disk usage of current folder

$ sudo blkid | grep "UUID" - show UUIDs of mounted devices

devices mounted at startup are collected in /etc/fstab (filesystem table); show it with $ cat /etc/fstab, explanation of the values here

Terminology

Process - A process is an instance of a running program, defined by allocated memory, cpu and I/O

Master Boot Record (MBR) - used to be the standard partition table (BIOS). It can have a maximum of four primary partitions; if more are needed, a single extended partition can be created, and inside the extended partition there can be logical partitions, limited by the alphabet for Windows and to 63 for Linux.

GPT - GUID (Globally Unique ID) Partition Table, the new standard (UEFI instead of BIOS)

Screenshot of Laptop file system (sudo parted -l):

Inodes - Index nodes, the inode table is the database of the filesystem, collecting all information about a file including the location in the memory, more info here

Kernel modules - modules that get added to the kernel. They can be loaded during runtime or configured to load at startup via modprobe. See here

udev - a daemon that dynamically creates and removes device files according to the rules in /etc/udev/rules.d. The devices can be found in the /dev folder and can be checked with the $ udevadm info --query=all --name=/dev/sda command

Git

Squash rebase etc.

here and here

git rebase --root -i

or

git rebase -i HEAD~(number of commits)

and to overwrite upstream

git push -f

create branch and push upstream

git checkout -b *branchname*
git push origin *branchname*

To-sort

git init /make a directory a git

git add /add to staging

git commit -m "comment" /commit staged files; -m skips the editor and writes the comment directly, --amend alters the most recent commit (add files to the stage if you want to change/include them)

git diff (show uncommitted changes)

git status

git log -w --oneline -p /shows history; -w ignores whitespace changes, --oneline summarizes (works also for show), -p (patch) shows details, --graph shows branches neatly, --all shows all branches

git show


git tag -a tagname SHA (-a = annotated (more info); SHA identifies the commit to tag, w/o it the most recent commit is tagged; -d = delete)

git branch name (creates branch name, w/o name command lists branches, -d flag deletes branch)

git checkout name (switch to branch name, creates new branch with -b flag)

git merge name (merges the name branch)

git revert SHA (reverse something a commit has done by creating a new commit reversing it)

git reset (deletes commits and their content; HEAD~1 or HEAD^ resets the last commit back to the working directory (--soft to stage, --hard to trash))

git reflog (logs where HEAD has been, which lets you recover commits lost to git reset)

Globbing lets you use special characters to match patterns/characters. In the .gitignore file, you can use the following:

  • blank lines can be used for spacing

  • # - marks line as a comment

  • * - matches 0 or more characters

  • ? - matches 1 character

  • [abc] - matches a, b, or c

  • ** - matches nested directories - a/**/z matches:
    • a/z
    • a/b/z
    • a/b/c/z
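These patterns can be verified with git check-ignore in a scratch repository (the pattern file below is a made-up example):

```shell
d=$(mktemp -d) && cd "$d"
git init -q .
printf '%s\n' '# build output' '*.log' 'temp?' 'a/**/z' > .gitignore

# -v prints which pattern matched each path (the paths need not exist)
git check-ignore -v debug.log a/b/c/z
```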

VSCode

Shortcuts

Ctrl + Shift + P - Show all commands

Ctrl + P - Go to file

F5 - Debugger

Ctrl + Shift + ` - Terminal

Auto Entrepreneur

Web developer as auto-entrepreneur: key information

The Centre de Formalités des Entreprises is generally the URSSAF

The APE code is generally: 62.10Z - Computer programming

The revenue (chiffre d'affaires) ceiling not to exceed is: €72,500

Typical remuneration: €500 to €650 / day

Social security contributions to pay: 22% of your revenue (CA)

Legal

Fair use

from janefriedman.com

Licenses

Open source licenses can be divided into two main categories: copyleft and permissive. This division is based on the requirements and restrictions the license places on users.

Copyleft vs. permissive

Copyleft

  • Allows you to use, modify and share
  • The copyleft license has to be maintained in derivative works

Permissive

  • "Anything goes"
  • permits proprietary derivative works

Copyleft licenses

GPL - GNU General Public License

The most popular open source license, created by Richard Stallman to protect GNU from becoming proprietary. More here.

Sources

  • https://resources.whitesourcesoftware.com/blog-whitesource/open-source-licenses-explained