Blog

  • tplink

    Visit original content creator repository
    https://github.com/danesparza/tplink

  • pli

    pli

    Create a CLI for any project within a few minutes with Pli!

    Installation and usage


    Global installation is recommended:

    npm install -g @dawiidio/pli 
    # or 
    yarn global add @dawiidio/pli

    but you can also install it locally:

    npm install @dawiidio/pli 
    # or
    yarn add @dawiidio/pli 

    You can also use it via npx:

    npx @dawiidio/pli

    Usage

    Initialize pli in the current directory:

    # by default pli init will produce templates directory with sample template file
    pli init
    
    # config file is optional, but if you want to create
    # more complex templates it may be useful
    # to generate it run
    pli init -c
    
    # by default pli init will produce typescript config file and examples, if you prefer js use
    pli init -c -t js

    The above command creates a templates directory with a sample template file in it, which looks like this:

    export function hello() {
        return 'Hello $NAME$';
    }

    As you can see, we have $NAME$, which defines a pli variable. This variable will be extracted and you will be prompted to fill in its value after selecting the template. You can now run the pli command in the current directory, and pli will prompt you with the message below:

    ? Select template (Use arrow keys)
    ❯ hello.js 
    

    Select the hello.js template by pressing Enter:

    ? Select template hello.js
    ? Output directory  <-- type the directory where the file should be saved, or leave it empty to save in the current one
    ? Insert value for NAME :  <-- type a value for NAME, e.g. David
    

    When the last variable is filled in, pli starts its magic and produces the result file. After a successful creation process you will see a summary like the one below:

    Following structure was created inside directory /your/current/working/directory
    ├─ hello.js
    

    That’s it! You can see the result by opening the file. For example,

    cat hello.js

    should return

    export function hello() {
        return 'Hello David';
    }

    CLI

    pli run
    
    runs cli, default command
    
    Commands:
      pli run   runs cli, default command                                  [default]
      pli init  initializes pli in current directory
    
    Options:
          --help                Show help                                  [boolean]
          --version             Show version number                        [boolean]
      -c, --config              path to config file                         [string]
      -d, --dry                 dry run, results will not be saved         [boolean]
      -l, --logLevel            log level. Available options: error, warn, info,
                                debug. Multiple options separated by pipe sign "|"
                                                         [string] [default: "error"]
      -o, --allowOverwriting    allow overwriting output files while committing data
                                to storage                                 [boolean]
      -t, --templatesDirectory  override templates directory
                                                    [boolean] [default: "templates"]
    
    

    Examples

    The above example is just the simplest one; you can create more sophisticated templates with many directories, variables and files. See the examples for more.

    Config file

    To create more powerful tools and templates, a config file may be needed. Run:

    # for typescript config file run
    pli init -c
    # for javascript config file run
    pli init -c -t js

    The above command creates a pli.config.ts or pli.config.js file in the current directory. Now it is time to create a more complex template: we will create a React component template with support for CSS modules.

    run

    mkdir 'templates/$NAME$'
    touch 'templates/$NAME$/$NAME$.tsx'
    touch 'templates/$NAME$/$NAME$.module.css'

    In the templates/$NAME$/$NAME$.tsx file, add:

    import React, { FunctionComponent } from "react";
    import styles from './$NAME$.module.css';
    
    interface $NAME$Props {
    
    }
    
    export const $NAME$:FunctionComponent<$NAME$Props> = ({  }) => {
    
        return (
            <div className={styles.$NAME$Root}>
                Component $NAME$
            </div>
        )
    };

    Now we have a template for a React component, but we want to have support for CSS modules, so we need to add a CSS file for it.

    In the templates/$NAME$/$NAME$.module.css file, add:

    .$NAME$Root {
    
    }

    Now we have template files for a React component with CSS module support, and it will work just fine as is, but we can make it even better.

    In the pli.config.ts file, add:

    import { Template, IConfig, TemplateVariable, ITemplateVariable, IVariableScope } from '@dawiidio/pli';
    
    const config: IConfig = {
        templates: [
            new Template({
                // readable name, instead of "$NAME$" you will see "React Component" in cli
                name: 'React Component', 
                // if you want to extend from existing template in templates directory you need to provide its name
                id: '$NAME$',
                // all will be generated relative to src/components directory
                defaultOutputDirectoryPath: 'src/components',
                variables: [
                    new TemplateVariable({
                        // variable name, it will be replaced with value in template files
                        name: 'NAME',
                        // you can pass default value for our variable
                        defaultValue: 'MyComponent',
                        // let's add some validation for our variable
                        validate: (value: string) => {
                            if (value.length < 3) {
                                throw new Error('Name must be at least 3 characters long');
                            }
                        }
                    }),
                    new TemplateVariable({
                        // variable name, it will be replaced with value in template files
                        name: 'DIRNAME',
                        // this variable subscribes to the NAME variable, so it will be updated when NAME is updated
                        defaultValue: '$NAME$',
                        ui: {
                            // you can also hide variables from user, so it will be used only for internal purposes
                            hidden: true
                        }
                    }).pipe(
                        // you can pipe the variable value and transform it as you want, 
                        // in this case we will replace all spaces with dashes
                        // and then we will convert all letters to lowercase
                        // so if we type "My Component" as NAME variable value
                        // DIRNAME will be "my-component"
                        (value: string, variable: ITemplateVariable, scope: IVariableScope) => value.replace(/\s/g, '-').toLowerCase()
                    )
                ],
            })
        ]
    }
    
    export default config;

    After adding the config file we can run pli. If you set $NAME$ to e.g. TestFilters, you will see the message below:

    Following structure was created inside directory /myProject/src/components
    ├─ TestFilters/
    │  ├─ TestFilters.module.css
    │  ├─ TestFilters.tsx
    
    

    Variables

    You can create variables in your templates by using dollar notation, e.g. $MY_VAR$. Variable names are case-sensitive, so $my_var$ and $MY_VAR$ are different variables. A variable name can contain only letters, numbers and underscores.

    Variables can be used in any file or directory name, or in another variable's defaultValue field, which means that the variable will subscribe to changes of the variables referenced in defaultValue. You can also use variables in outputMapping in the template config.

    Scopes

    Variables are organised in scopes, so you can have variables with the same name in different scopes. This is useful when you want to access variables from a different template. For example, if you add a template as an entry to another template, you can use variables from the parent template in the child. Also, variables from the child will be extracted and prompted for when selecting the parent template.

    Example:

    import { Template, IConfig, TemplateVariable } from '@dawiidio/pli';
    
    const childTemplate = new Template({
        name: 'Child',
        id: 'child.ts',
        variables: [
            new TemplateVariable({
                name: 'CHILD_VAR',
                defaultValue: 'child'
            })
        ]
    });
    
    // parent template will prompt for PARENT_VAR and CHILD_VAR
    const parentTemplate = new Template({
        name: 'Parent',
        id: 'parent.ts',
        variables: [
            new TemplateVariable({
                name: 'PARENT_VAR',
                defaultValue: 'parent'
            }),
        ],
        entries: [
            childTemplate
        ]
    });
    
    const config: IConfig = {
        templates: [
            parentTemplate,
        ]
    }
    
    export default config;

    Output mapping

    You can map the output of a template, which allows you to create more complex templates. For example, you can create a template that remaps a child template's output to a different directory or filename.

    import { Template, IConfig, TemplateVariable } from '@dawiidio/pli';
    
    const childTemplate = new Template({
        name: 'Child',
        id: 'child.ts',
        variables: [
            new TemplateVariable({
                name: 'CHILD_VAR',
                defaultValue: 'child'
            })
        ]
    });
    
    const parentTemplate = new Template({
        name: 'Parent',
        id: 'parent.ts',
        variables: [
            new TemplateVariable({
                name: 'PARENT_VAR',
                defaultValue: 'parent'
            }),
        ],
        entries: [
            childTemplate
        ],
        outputMapping: {
            // note that you cannot use CHILD_VAR in this scope
            'child.ts': '$PARENT_VAR$.ts',
            'parent.ts': '$PARENT_VAR$_somePostfix.ts',
        }
    });
    
    const config: IConfig = {
        templates: [
            parentTemplate,
        ]
    }
    
    export default config;
    Visit original content creator repository https://github.com/dawiidio/pli
  • Deep-Stream-Cam

    Deep-Stream-Cam

    Deep-Stream-Cam enables seamless real-time face swapping and video deepfakes with just a single image and a single click. Leverage AI to effortlessly transform video content in an instant and explore advanced visual effects.



    Disclaimer

    This software aims to enhance the AI-generated media industry, supporting tasks like animating custom characters or creating digital models.

    Important Notes:

    • The software has built-in checks to prevent processing inappropriate media (e.g., nudity, war footage).
    • Users must respect ethical guidelines and obtain consent when using real people’s faces.
    • The software may be modified or shut down if legally required.

    Quick Start – Pre-built Versions

    For Windows / NVIDIA GPU:

    For Mac / Apple Silicon:

    These pre-built versions are ideal for non-technical users who want a quick setup. Manual installation is also available for advanced users.

    TLDR: Live Deepfake in 3 Easy Steps

    1. Select a face: Choose the face to swap.
    2. Choose your camera: Select the camera source.
    3. Press Live!: See the magic in real-time!

    Features & Uses – Real-time Face Swapping

    1. Mouth Mask

    • Retain original mouth movements for natural realism.
    • Mouth Mask

    2. Face Mapping

    • Swap faces across multiple subjects in a scene.
    • Face Mapping

    3. Movie Mode

    • Replace faces in movies in real-time.
    • Movie Mode

    4. Live Shows

    • Perfect for live performances or streaming.
    • Live Show

    5. Memes

    • Create viral memes with ease.
    • Meme Creation
    • Created using Deep-Stream-Cam

    Installation (Manual Setup)

    Note: Installation requires technical skills. For easier use, consider the pre-built versions.


    Requirements

    Steps to Install

    1. Clone the Repository
    git clone https://github.com/hacksider/Deep-Live-Cam.git
    cd Deep-Live-Cam
    2. Download Models

      Place these files in the models folder.

    3. Install Dependencies

    pip install -r requirements.txt
    4. Run the Application
    python run.py

    GPU Acceleration (Optional)

    For enhanced performance, use GPU acceleration for processing:

    CUDA (NVIDIA GPU)

    1. Install CUDA Toolkit.
    2. Install dependencies:
    pip uninstall onnxruntime onnxruntime-gpu
    pip install onnxruntime-gpu==1.16.3
    3. Run the application:
    python run.py --execution-provider cuda

    CoreML (Apple Silicon)

    1. Install dependencies:
    pip uninstall onnxruntime onnxruntime-silicon
    pip install onnxruntime-silicon==1.13.1
    2. Run:
    python run.py --execution-provider coreml

    Usage

    Image/Video Mode

    1. Run python run.py.
    2. Select source and target images/videos.
    3. Click “Start” to process.

    Webcam Mode

    1. Run python run.py.
    2. Choose source face image.
    3. Click “Live” to start webcam mode and preview.
    4. Stream with OBS or any screen capture tool.

    Command-Line Arguments

    Here’s a quick overview of the available command-line options:

    • -s SOURCE_PATH: Select a source image.
    • -t TARGET_PATH: Choose a target image or video.
    • --mouth-mask: Use the mouth mask feature.
    • --live-mirror: Live camera mirror.
    • -v, --version: Show version info.

    For the full list, refer to the CLI documentation.
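
    For example, a run that swaps a source face into a target video with the mouth mask enabled might look like this (the file names below are placeholders):

    python run.py -s my_face.jpg -t input_video.mp4 --mouth-mask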

    Press Coverage

    Deep-Stream-Cam has garnered attention for its groundbreaking AI-powered face-swapping capabilities:

    Credits

    Thanks to the following contributors:

    • ffmpeg: for video processing.
    • deepinsight: for face detection models.
    • havok2-htwo: for webcam integration.
    • Special thanks to all contributors for their dedication and support.

    Visit original content creator repository https://github.com/ENGINEER-MUHAMMAD-SHAHZAIB/Deep-Stream-Cam
  • revdefine

    Rev Define (revdefine)

    RChain BlockExplorer

    This is just a front-end project which only displays the data in RChain. There is no mock data to demonstrate with. If you really want to run the explorer with real data, you have to start the custom RNode and connect to it.

    The custom RNode is not an official RNode version developed by the whole core team! Bugs and problems are expected.

    Prerequisites

    You need to install the software below to compile.

    1. docker

    Build in docker

    Because a lot of the JS libraries in revdefine are out of date, a Docker-based build is provided here to make sure compilation works across different Node.js environments for everyone.

    $ docker run --rm -it --entrypoint /bin/bash -v $(pwd):/revdefine ubuntu:18.04
    dockerBash $ cd /revdefine
    dockerBash $ bash docker-build.sh

    After the commands above, you will find the static files in dist.

    Start the app in development mode (hot-code reloading, error reporting, etc.)

    $ quasar dev

    Then you can browse the page at http://localhost:8080

    Build the app for production

    quasar build

    The generated static files will be located in dist.

    config RNode server

    The default host that the explorer tries to connect to is localhost.

    You can edit the hosts by editing productionHost.ts

    Currently revdefine.io provides the data, so you can also use the revdefine API.

    export const productionHost = 'https://revdefine.io'
    export const productionRNodePort = 40406

    Visit original content creator repository
    https://github.com/RevDefine/revdefine

  • svs-paediatric-delerium

    svs-paediatric-delerium

    Summary

    This is the final year project for the SvS team. Currently the project consists of a prototype Audit system which will display compliance data for delirium treatment within PICUs across the UK & ROI.

    The problem that required this system is that paediatric ICUs had no modern and convenient way to record data regarding patients experiencing delirium. The approach taken was one of modernisation and simplification to assist the ICUs. This was done using software engineering techniques to aid in development and create a system that improved on the previous one. This work will greatly aid the nurses in paediatric ICUs in their process of recording the Audit data.

    Technical Requirements

    Only Docker is specifically required to run the project, but Postgres and Node.js are required if the application is to be run without the use of Docker.

    How to start the project

    A start-up script is used to start the application. There is both a .ps1 version, for Windows machines, and a .sh version, for Linux machines.

    To allow PowerShell scripts to run on your machine, please follow the instructions within the PowerShell set up section.

    Please see the commands below to start the application

    Windows

    ./start-docker.ps1

    Powershell set up

    If you wish to use the PowerShell script, you may need to run the following command with administrator access within the PowerShell application.

    Set-ExecutionPolicy RemoteSigned

    Linux

    ./start-docker.sh

    Script Arguments

    The will delete the existing database volume for this project before starting the project to allow the database to be re-initialised.

    The following arguments can be supplied to each script

    • -b
      • Executes in the background
    • -c
      • Deletes all containers and volumes related to the document
    • -n
      • Deletes all project images along with containers and volumes
    • -p
      • Starts the production environment. If in a Linux environment and the .sh script is used, this also sets cron jobs for the backup along with the rolling delete of the log data.
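
    For example, to clean up the existing containers and volumes and then start the stack in the background, the flags can be combined (assuming the script accepts multiple flags at once):

    ./start-docker.sh -c -b
    # or, on Windows
    ./start-docker.ps1 -c -b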

    Backup (only on linux)

    • -d
      • Creates a dump of the database
    • -startcron
      • Begins a crontab daemon that will dump the database at midnight every day
    • -stopcron
      • Stops the crontab daemon

    Restore (only on linux)

    • -r c
      • Restores the child dump.
    • -r f
      • Restores the father dump.
    • -r g
      • Restores the grandfather dump.

    Using Docker

    The docker daemon must be running.

    Development Mode

    docker-compose -f docker-compose.yml -f docker-compose.dev.yml up

    Production Mode

    docker-compose -f docker-compose.yml -f docker-compose.prod.yml up

    Running Without Docker

    The backend needs to have a running Postgres server available.

    More about the frontend and the backend can be found in their respective README.md files.

    Initialising Database

    If the database-set-up.sh file is causing errors when starting the Docker container, stop the container and run the commands below within a bash terminal. Once this is done, attempt to run the container again. See the Stack Overflow answer here.

    vi database-set-up.sh
    :set ff=unix
    :wq

    If you would like to change the setup of the database, alter the .sql files within the relevant database folder inside the sql folder.

    File Structure

    • Docker: These files are stored within the main directory of the project, which includes:
      • The docker-compose files for both the development and production environments
      • The required database-set-up.sh script which is used to build the database
        • This script requires the existence of the ./sql folder and its subfolders
    • PgAdmin4: The relevant files to set up the Postgres server with PgAdmin4 are within the ./pgadmin4 folder; more can be learnt here
    • Startup Scripts: Both these scripts reside in the main directory
    • Backend: Stored within the ./APIs folder
      • A specific README for the backend can be found at ./APIs/README.MD
      • The main code of the project can be found in ./APIs/src
      • The automated tests can be found in ./APIs/tests
      • When the commands npm install & npm run docs are executed within the ./APIs directory:
        • The site can be viewed by opening ./APIs/docs/site/index.html in a web browser
        • Markdown files can be viewed within the folder ./APIs/docs/markdown
    • Frontend: Stored within the ./Audit folder
      • A specific README for the frontend can be found at ./Audit/README.MD
      • When the commands npm install & npm run docs are executed within the ./Audit directory:
        • The site can be viewed by opening ./Audit/docs/site/index.html in a web browser
        • Markdown files can be viewed within the folder ./Audit/docs/markdown
      • Custom components made for the application can be found at ./Audit/src/components
      • The pages of the site can be found in ./Audit/src/pages
      • The entry point of the application is the ./Audit/src/index.tsx file
      • The routing can be found in ./Audit/src/AppRouter.tsx
      • Images used for the site can be found in ./Audit/src/assets/images
    • Database:
      • The .sql files that build the database belong in the sql folder
        • Each subfolder represents a separate database and these folders contain the .sql files

    Accessing Database Server

    CLI

    To access the database once the container is running, use the command below, where {dev|prod} depends on whether the development or production environment is running.

    docker exec -it {dev|prod}_svs_postgres psql -U postgres

    PgAdmin4

    The instructions here were followed to set up pgAdmin4. This is only available within the development environment.

    Access

    The instructions below must be repeated every time the project is launched.

    1. Go to http://localhost:5050/, the page may take a while to load

    2. Enter in the following details

    3. Select the server named dev-svs-postgres-server and enter the password postgrespw

    As an alternative to step 3, you can add the server manually by following the steps below:

    1. Select ‘Add New Server’

    2. Enter any name you would like for the server to be called

    3. Select the ‘Connection’ tab and enter the below details, the default values for the other fields mentioned below should be left

      • Host name/address = svs_postgres
      • Username = postgres
      • Password = postgrespw

    View Tables

    • Within the side panel go to serverName -> Databases -> databaseName -> Schemas -> public -> Tables
      • Replace ‘serverName’ with the name given to the server within the Access section
      • Replace ‘databaseName’ with the specific name of the database in which the tables are contained in
    • To view the data within the table right click the table name and select ‘View/Edit Data’

    Convert Backend to use HTTPS

    Within ./APIs/src/index.ts, at the bottom of the file:

    Ensure the below code is uncommented:

    https.createServer(options, app)
    .listen(port, () => {
      console.log(`listen port ${port}`);
      console.log(`Go to https://${baseIP}:${port}/swagger-docs for documentation`);
    });

    Ensure the below code is commented out:

    app.listen(port,()=> {
      console.log(`listen port ${port}`);
      console.log(`Go to http://${baseIP}:${port}/swagger-docs for documentation`);
    });

    Ensure your own ./APIs/server.cert and ./APIs/server.key are used.
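
    If you only need a certificate for local testing, a self-signed pair can be generated with OpenSSL (the subject below is a placeholder; use a properly issued certificate in production):

    openssl req -x509 -newkey rsa:4096 -nodes -days 365 \
      -keyout APIs/server.key -out APIs/server.cert \
      -subj "/CN=localhost"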

    Removing Caesar Cipher

    As traffic is encrypted, there is no practical need for the Caesar cipher that is used, although the application will still work with it.

    If you would like to remove it, this can be done by replacing the /login endpoint definition with the below:

    app.post("/login", (request: Request, response: Response, next:NextFunction) => authenticate);

    Then the configuration used when calling the /login endpoint within ./Audit/src/pages/SignIn/SignIn.tsx can be changed to the below:

    const loginConfig = {
      method: "post",
      url: `${process.env.REACT_APP_API_URL}/login`, 
      data: {
        username: username,
        password: password,
      }
    };

    The ceaserCipherr method can then be removed from this file, if so wished.

    Visit original content creator repository
    https://github.com/adamlogan17/svs-paediatric-delerium

  • self-reasoning-tokens-pytorch

    Self Reasoning Tokens – Pytorch (wip)

    Exploration into the proposed Self Reasoning Tokens by Felipe Bonetto. The blog post seems a bit unfleshed out, but the idea of stop gradients from next token(s) is an interesting one.

    My initial thought was to apply a stop gradient mask on the attention matrix, but then I realized that the values of the “reasoning” tokens could not be stop-gradiented correctly without memory issues.

    While walking the dog and meditating on this, I came to the realization that one can create independent stop gradient masks for queries, keys, values in either flash attention or a custom attention backwards, and there may be a whole array of possibilities there. If any experiments come back positive from this exploration, will build out a concrete implementation of this.

    Install

    $ pip install self-reasoning-tokens-pytorch

    Usage

    import torch
    from self_reasoning_tokens_pytorch import Transformer
    
    model = Transformer(
        dim = 512,
        depth = 4,
        num_tokens = 256,
        stop_grad_next_tokens_to_reason = True
    )
    
    x = torch.randint(0, 256, (1, 4))
    
    loss = model(
        x,
        num_reason_tokens = 4,                # number of reasoning tokens per time step
        num_steps_future_can_use_reason = 16, # say you wish for reason tokens to be only attended to by tokens 16 time steps into the future
        return_loss = True
    )
    
    loss.backward()
    
    logits = model(x, num_reason_tokens = 4)

    Or use the novel attention with ability to pass specific stop gradient masks for queries, keys, values

    import torch
    from self_reasoning_tokens_pytorch import stop_graddable_attn
    
    q = torch.randn(2, 8, 1024, 64)
    k = torch.randn(2, 8, 1024, 64)
    v = torch.randn(2, 8, 1024, 64)
    
    stop_grad_mask = torch.randint(0, 2, (8, 1024, 1024)).bool()
    
    out = stop_graddable_attn(
        q, k, v, causal = True,
        q_stop_grad_mask = stop_grad_mask,
        k_stop_grad_mask = stop_grad_mask,
        v_stop_grad_mask = stop_grad_mask
    )
    
    out.shape # (2, 8, 1024, 64)

    The mask should look something like

    Todo

    • deviating from blog post, also try optimizing only a subset of attention heads by tokens far into the future

    Citations

    @misc{Bonetto2024,
        author  = {Felipe Bonetto},
        url     = {https://reasoning-tokens.ghost.io/reasoning-tokens/}
    }
    Visit original content creator repository https://github.com/lucidrains/self-reasoning-tokens-pytorch
  • Docs

    Personal Docs

    Introduction

    Welcome to my personal documentation repository! This repo is a collection of the notes, guides, troubleshooting logs, and resources I use in my daily development. It includes everything from setting up full-stack applications to solving common errors.

    The aim is not to create general-purpose documentation that helps everyone, but rather something I can refer to when working on the same technologies or concepts in the future. This will provide verified code snippets and the correct way of doing things, avoiding trial and error, saving time, and increasing my efficiency in writing repeatable code. This allows me to focus on learning new things.

    Note: This documentation does not contain everything I have learned. At times of urgency, I have not and may not document certain concepts or snippets. Hopefully, I will add them someday soon.

    Folder Structure

    Below is an overview of the folder structure and what each section contains:

    /development

    This section contains detailed guides and documentation organised by various topics:

    • setup: Setup of frontend and backend tech stacks using TypeScript, React Native, etc., as well as packages like MkDocs and tools like Obsidian, and many more.
    • snippets: Working code snippets of the various packages or tools, etc.
    • miscellaneous: Things that I could not classify are added to this section.
    • deployment: Docs related to deployment, like creating a VM, hosting a site on a VM, using nginx, etc.

    /error-solutions

    A collection of error logs and their corresponding solutions:

    • development-errors: Errors faced during development, like CORS, handling data, deployment, etc.
    • cpp-errors: C++ language errors that I usually encounter during DSA or CP.

    /general

    Documentation specific to general things, like those involving the OS:

    • ubuntu: Problems related to ubuntu.

    /learning

    A section dedicated to learning resources and materials, docs that I created while learning:

    • docker
    • git-github
    • mongodb
    • react
    • react-native
    • typescript
    • UI-Libraries-Frontend-Styling

    /legacy-docs

    These are the docs that I created earlier; they still contain some useful notes but are not very well written or correctly classified.

    How to Use

    Feel free to browse through the directories to find the specific documentation, error and solution, or resources that you need. Each folder is organized to make it easy to locate the relevant information.

    Contribution

    This repository is primarily for my personal use, but if you find something useful and would like to contribute or suggest improvements, feel free to create an issue, submit a pull request, or mail me directly.

    Visit original content creator repository
    https://github.com/XoXoHarsh/Docs

  • binary-tree-zigzag-level-order-traversal

    Binary Tree Zigzag Level Order Traversal

    Given a binary tree, return the zigzag level order traversal of its nodes’ values. (ie, from left to right, then right to left for the next level and alternate between).

    For example:
    Given binary tree [3,9,20,null,null,15,7],
        3
       / \
      9  20
        /  \
       15   7
    return its zigzag level order traversal as:
    [
      [3],
      [20,9],
      [15,7]
    ]
    

    Implementation :

    /**
     * Definition for a binary tree node.
     * public class TreeNode {
     *     int val;
     *     TreeNode left;
     *     TreeNode right;
     *     TreeNode(int x) { val = x; }
     * }
     */
    class Solution {
        public List<List<Integer>> zigzagLevelOrder(TreeNode root) {
            List<List<Integer>> list = new ArrayList<>();
            if(root == null)
                return list;
            Queue<TreeNode> queue = new LinkedList<>();
            queue.offer(root);
            int currentLevel = 0;
            while(!queue.isEmpty()){
                currentLevel++;
                List<Integer> level = new ArrayList<>();
                int size = queue.size();
                for(int i = 0; i < size; i++){
                    TreeNode current = queue.poll();
                    level.add(current.val);
                    if(current.left != null)
                        queue.offer(current.left);
                    if(current.right != null)
                        queue.offer(current.right);
                }
                if(currentLevel % 2 == 0)
                    Collections.reverse(level);
                list.add(level);
                
            }
           return list; 
        }
    }

    References :

    https://www.youtube.com/watch?v=smjr2ow6oKc (Alternate approach using two stacks)

    Visit original content creator repository
    https://github.com/eMahtab/binary-tree-zigzag-level-order-traversal

  • garmin-workouts

    Garmin Connect Workouts Tools


    Command line tools for managing Garmin Connect workouts.

    Features:

    • Target power is set according to Your current FTP.
    • All workouts under Your control stored as JSON files.
    • Easy to understand workout format, see examples below.
    • Workout parts like warm-up or cool-down are reusable.
    • Schedule saved workouts
    • The most important parameters (TSS, IF, NP) are embedded in the workout description field.

    Installation

    Requirements:

    • Python 3.x (doc)

    Clone this repo:

    git clone https://github.com/mkuthan/garmin-workouts.git

    Use the venv command to create a virtual copy of the entire Python installation:

    cd garmin-workouts
    python3 -m venv venv

    Set your shell to use the venv paths for Python by activating the virtual environment:

    source venv/bin/activate

    Install dependencies:

    pip3 install -r requirements.txt

    Usage

    The first call to Garmin Connect takes some time to authenticate the user. Once the user is authenticated, a cookie jar is created with session cookies for further calls. This is required due to the strict request limits of the Garmin SSO service.

    Authentication

    Define Garmin connect account credentials as GARMIN_USERNAME and GARMIN_PASSWORD environment variables:

    export GARMIN_USERNAME=username
    export GARMIN_PASSWORD=password

    Alternatively use -u and -p command line arguments:

    python -m garminworkouts -u [USERNAME] -p [PASSWORD]

    Import Workouts

    Import workouts into Garmin Connect from definitions in YAML files. If the workout already exists it will be updated:

    python -m garminworkouts import --ftp [YOUR_FTP] 'sample_workouts/*.yaml'

    Sample workout definition:

    name: "Boring as hell but simple workout"
    
    steps:
      - { power: 50, duration: "10:00" }
      - { power: 70, duration: "20:00" }
      - { duration: "5:00" }
      - { power: 70, duration: "20:00" }
      - { power: 50 }
    • Target power is defined as a percentage of FTP (provided as a mandatory command line parameter). If the target power is not specified, “No target” will be used for the workout step.
    • Target power may also be defined as an absolute value like “150W”, which can be useful in FTP ramp tests.
    • Duration is defined in HH:MM:SS (or MM:SS, or SS) format. If the duration is not specified, “Lap Button Press” will be used to move to the next workout step.

    Reusing workout definitions:

    name: "Boring as hell but simple workout"
    
    steps:
      - !include inc/warmup.yaml
      - { power: 70, duration: "20:00" }
      - { duration: "5:00" }
      - { power: 70, duration: "20:00" }
      - !include inc/cooldown.yaml
    • !include is a custom YAML directive for including another file as a part of the workout.

    Reusing workout steps:

    name: "Boring as hell but simple workout"
    
    steps:
      - !include inc/warmup.yaml
      - &INTERVAL { power: 70, duration: "20:00" }
      - { duration: "5:00" }
      - *INTERVAL
      - !include inc/cooldown.yaml
    • Thanks to YAML aliases, workout steps can be easily reused once defined.

    Sample Over-Under workout:

    name: "OverUnder 3x9"
    
    steps:
      - !include inc/warmup.yaml
      - &INTERVAL
        - &UNDER { power: 95, duration: "2:00" }
        - &OVER { power: 105, duration: "1:00" }
        - *UNDER
        - *OVER
        - *UNDER
        - *OVER
        - { power: 50, duration: "3:00" }
      - *INTERVAL
      - *INTERVAL
      - !include inc/cooldown.yaml
    • All nested sections are mapped as repeat steps in Garmin Connect: the first repeat for the warm-up, the second for the main interval (repeated 3 times) and the last one for the cool-down.

    To import your workout from an xlsx file, construct a table in Excel that looks like this (making sure that all Excel cells are set to text and not to date or any other format):

    Start | End | Duration
    43    | 85  | 3:00
    85    |     | 15:00
    85    | 43  | 2:00

    If your “start” and “end” power for a step differ, a ramp of 10-second steps will be created by default for the chosen duration. If more than 50 total steps are to be uploaded, the ramp’s steps will get longer so that the total number of steps stays under Garmin’s maximum of 50.

    Tips: Do not use your TACX without the power cable, as your Garmin will have a hard time controlling the trainer while changing from one step to the next. Turn off the tones in your Garmin.

    If you wish to give your values in W instead of % of your FTP:

    Start | End  | Duration
    80W   | 160W | 3:00
    160W  |      | 15:00
    160W  | 80W  | 2:00

    You can then import as with the yaml files:

    python -m garminworkouts import --ftp [YOUR_FTP] my.workout.xlsx

    This will generate a yaml file with the name my.workout.xlsx. The name of the workout will be “my.workout”.

    Export Workouts

    Export all workouts from Garmin Connect into local directory as FIT files. This is the easiest way to synchronize all workouts with Garmin device:

    python -m garminworkouts export /mnt/GARMIN/NewFiles

    List Workouts

    Print summary for all workouts (workout identifier, workout name and description):

    $ python -m garminworkouts list
    188952654 VO2MAX 5x4           FTP 214, TSS 80, NP 205, IF 0.96
    188952362 TEMPO 3x15           FTP 214, TSS 68, NP 172, IF 0.81
    188952359 SS 3x12              FTP 214, TSS 65, NP 178, IF 0.83
    188952356 VO2MAX 5x3           FTP 214, TSS 63, NP 202, IF 0.95
    188952357 OU 3x9               FTP 214, TSS 62, NP 188, IF 0.88
    188952354 SS 4x9               FTP 214, TSS 65, NP 178, IF 0.83
    188952350 TEMPO 3x10           FTP 214, TSS 49, NP 169, IF 0.79
    188952351 TEMPO 3x12           FTP 214, TSS 57, NP 171, IF 0.80
    188952349 OU 3x6               FTP 214, TSS 47, NP 181, IF 0.85
    188952348 SS 6x6               FTP 214, TSS 65, NP 178, IF 0.83
    127739603 FTP RAMP             FTP 214, TSS 62, NP 230, IF 1.08

    Get Workout

    Print full workout definition (as JSON):

    $ python -m garminworkouts get --id [WORKOUT_ID]
    {"workoutId": 188952654, "ownerId": 2043461, "workoutName": "VO2MAX 5x4", "description": "FTP 214, TSS 80, NP 205, IF 0.96", "updatedDate": "2020-02-11T14:37:56.0", ...

    Delete Workout

    Permanently delete workout from Garmin Connect:

    python -m garminworkouts delete --id [WORKOUT_ID]

    Schedule Workouts

    Schedule preexisting workouts using the workout number (e.g. “https://connect.garmin.com/modern/workout/234567894”). The workout number is the last digits of the URL, here: 234567894. Note: the date format is as follows: 2021-12-31.

    python -m garminworkouts schedule -d [DATE] -w [WORKOUT_ID]
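
    For example, to schedule workout 234567894 (the workout number from the URL above) for 31 December 2021:

    python -m garminworkouts schedule -d 2021-12-31 -w 234567894
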
    Visit original content creator repository https://github.com/mkuthan/garmin-workouts
  • tokyo

    tokyo

    greed2411

    When you hit rock-bottom, you still have a way to go until the abyss.- Tokyo, Netflix’s “Money Heist” (La Casa De Papel)



    image belongs to teepublic

    When one is limited by the technology of the time, one resorts to Java APIs using Clojure.

    This is my first attempt at Clojure: a REST API which, when a file is uploaded, identifies its mime-type, extension, and any text present inside the file, and returns the information as JSON. This works for several types of files, including the ones which require OCR, thanks to Tesseract. See the complete list of file formats supported by Tika.

    It uses ring for the Clojure HTTP server abstraction, jetty for the actual HTTP server, pantomime as a Clojure abstraction over Apache Tika, and it can also optionally be served behind traefik acting as a reverse proxy.

    Installation

    Two options:

    1. Download OpenJDK 11 and install lein, then run lein uberjar (see the sketch after this list).
    2. Use the Dockerfile (Recommended)
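
    For option 1 on a Debian/Ubuntu-like system, the setup might look roughly like this (package names and availability vary between distributions):

    sudo apt-get install openjdk-11-jdk leiningen
    lein uberjar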

    Building

    1. You can obtain the .jar file from releases (if it’s available).
    2. Else build the docker image using Dockerfile.
    docker build ./ -t tokyo
    docker run tokyo:latest
    

    Note: the server defaults to running on port 80, because that is the port exposed in the docker image. You can change the port number by setting an environment variable TOKYO_PORT inside the Dockerfile, or in your shell prompt, to whichever port number you’d like when running the .jar file.
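
    For example, to run the built jar on a different port (the exact jar path depends on how lein names the uberjar output):

    export TOKYO_PORT=8080
    java -jar target/uberjar/tokyo-*-standalone.jar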

    I’ve also added a docker-compose.yml which uses traefik as a reverse proxy; use docker-compose up.

    Usage

    1. The /file route: make a POST request by uploading a file.

      • The command line approach using curl:
      curl -XPOST  "http://localhost:80/file" -F file=@/path/to/file/sample.doc
      
      {"mime-type":"application/msword","ext":".bin","text":"Lorem ipsum \nLorem ipsum dolor sit amet, consectetur adipiscing elit. Nunc ac faucibus odio."}
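
      Or, using Python requests:
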
      >>> import requests
      >>> import json
      
      >>> url = "http://localhost:80/file"
      >>> files = {"file": open("/path/to/file/sample.doc")}
      >>> response = requests.post(url, files=files)
      >>> json.loads(response.content)
      
      {'mime-type': 'application/msword', 'ext': '.bin', 'text': 'Lorem ipsum \nLorem ipsum dolor sit amet, consectetur adipiscing elit. Nunc ac faucibus odio.'}

      The general API response JSON schema is of the form:

      :mime-type (string) - the mime-type of the file. eg: application/msword, text/plain etc.
      :ext       (string) - the extension of the file. eg: .txt, .jpg etc.
      :text      (string) - the text content of the file.
      

    Note: The files being uploaded are stored as temp files in /tmp and removed an hour later (assuming the JVM is still running for that hour or so).

    2. A GET request to / returns Hello World as plain text, to act as a ping.

    If going down the path of using docker-compose, the request gets altered to:

    curl -XPOST  -H Host:tokyo.localhost http://localhost/file -F file=@/path/to/file/sample.doc
    
    {"mime-type":"application/msword","ext":".bin","text":"Lorem ipsum \nLorem ipsum dolor sit amet, consectetur adipiscing elit. Nunc ac faucibus odio."}

    and

    >>> response = requests.post(url, files=files, headers={"Host": "tokyo.localhost"})

    where tokyo.localhost has been mentioned in docker-compose.yml

    Why?

    I had to do this because neither Python’s filetype (doesn’t identify .doc, .docx, or plain text) nor textract (a hacky way of extracting text, where one needs to know the extension before extracting) is as good as Tika. The Go version, filetype, didn’t support a way to extract text. So I resorted to spiraling down the path of using Java’s Apache Tika via the Clojure pantomime library.

    License

    Copyright © 2020 greed2411/tokyo

    This program and the accompanying materials are made available under the terms of the Eclipse Public License 2.0 which is available at http://www.eclipse.org/legal/epl-2.0.

    This Source Code may also be made available under the following Secondary Licenses when the conditions for such availability set forth in the Eclipse Public License, v. 2.0 are satisfied: GNU General Public License as published by the Free Software Foundation, either version 2 of the License, or (at your option) any later version, with the GNU Classpath Exception which is available at https://www.gnu.org/software/classpath/license.html.

    Visit original content creator repository https://github.com/greed2411/tokyo