
nginx reverse proxy configuration settings?

Hey all,
After recently working through my nginx reverse proxy configuration, I noticed mine, while working as expected, could be structured much cleaner than it currently is.
So I'm curious about two things:
  1. How others have structured their nginx.conf, sites-enabled/default, conf.d/jellyfin.conf, and any other config files they may have. It seems the best practice is to define each area within its own config file - for example, HTTP headers configured in conf.d/http_headers.conf and included in nginx.conf (see the sketch just after this list).
  2. What specific settings do others use for both security and performance for Jellyfin? Obviously the Jellyfin docs have nginx settings listed, but I'm curious what others do beyond these.
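To make point 1 concrete, splitting shared directives into their own file looks something like this (a minimal sketch; the snippets path and the specific headers here are just illustrative, not a recommendation):

# /etc/nginx/snippets/http_headers.conf -- shared security headers
add_header X-Frame-Options "SAMEORIGIN";
add_header X-Content-Type-Options "nosniff";

# inside any server block that should get these headers:
include /etc/nginx/snippets/http_headers.conf;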
For context, I run a local static website along with proxying to jellyfin and I'm sure I could be doing things better than I currently am.
Here's my nginx.conf for example:
## =================================
## to test configuration for errors
## run: gixy /etc/nginx.conf
## =================================

user www-data;
worker_processes auto;
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;

events {
    worker_connections 1024;
    multi_accept on;
}

http {
    charset utf-8;
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    server_tokens off;
    log_not_found off;
    types_hash_max_size 2048;

    # size limits & buffer overflows
    client_body_buffer_size 128K;
    client_header_buffer_size 16k;
    client_max_body_size 32M;
    large_client_header_buffers 4 16k;

    # timeouts
    client_body_timeout 10;
    client_header_timeout 10;
    keepalive_timeout 5 5;
    send_timeout 10;

    server_names_hash_bucket_size 128;
    server_name_in_redirect off;

    # MIME
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    # logging
    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    # Diffie-Hellman parameter for DHE ciphersuites
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;

    # SSL settings
    ssl_session_cache shared:le_nginx_SSL:10m;
    ssl_session_timeout 1d;
    ssl_session_tickets off;
    ssl_prefer_server_ciphers on;
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384;

    # OCSP stapling
    ssl_stapling on;
    ssl_stapling_verify on;
    resolver 8.8.8.8 8.8.4.4 valid=60s;
    resolver_timeout 5s;

    # virtual host configs
    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;

    # gzip settings
    gzip on;
    gzip_http_version 1.1;
    gzip_vary on;
    gzip_disable "MSIE [1-6]\.(?!.*SV1)";
    gzip_proxied any;
    gzip_comp_level 1;
    gzip_min_length 10240;
    gzip_buffers 16 8k;

    # what gzip will compress
    gzip_types text/plain text/css text/xml application/json application/javascript application/rss+xml application/atom+xml image/svg+xml;
}
jellyfin.conf:
server {
    listen 80;
    listen [::]:80;
    server_name $webAddress;
    set $jellyfin 192.168.20.203;

    # only domain name requests allowed
    if ($host !~ ^($webAddress)$ ) { return 444; }

    # only GET, HEAD, POST requests allowed
    if ($request_method !~ ^(GET|HEAD|POST)$ ) { return 444; }

    # redirect to HTTPS
    if ($host = $webAddress) { return 302 https://$server_name$request_uri; }

    return 404;
}

server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name $webProxyAddress;
    set $jellyfin 192.168.20.203;

    # if they come here using HTTP, bounce them to the correct scheme
    error_page 497 https://$server_name:$server_port$request_uri;

    # only domain name requests allowed
    if ($host !~ ^($webProxyAddress)$ ) { return 444; }

    # only GET, HEAD, POST requests allowed
    if ($request_method !~ ^(GET|HEAD|POST)$ ) { return 444; }

    # block download agents
    if ($http_user_agent ~* LWP::Simple|BBBike|wget) { return 403; }

    # SSL certs
    ssl_certificate ...;
    ssl_certificate_key ...;
    ssl_trusted_certificate ...;

    # HTTP security headers -- from the Jellyfin docs
    add_header X-Frame-Options "SAMEORIGIN";
    add_header X-XSS-Protection "1; mode=block";
    add_header X-Content-Type-Options "nosniff";
    add_header Content-Security-Policy "default-src https: data: blob:; style-src 'self' 'unsafe-inline'; script-src 'self' 'unsafe-inline' https://www.gstatic.com/cv/js/sender/v1/cast_sender.js; worker-src 'self' blob:; connect-src 'self'; object-src 'none'; frame-ancestors 'self'";

    # HTTP security headers -- added for A+ rating
    add_header Strict-Transport-Security "max-age=63072000; includeSubdomains; preload";
    add_header Referrer-Policy 'strict-origin';
    add_header Expect-CT 'enforce, max-age=3600';
    add_header Feature-Policy "autoplay 'none'; camera 'none'";
    add_header Permissions-Policy 'autoplay=(); camera=()';
    add_header X-Permitted-Cross-Domain-Policies none;

    # password security
    auth_basic "Restricted Content";
    auth_basic_user_file /etc/nginx/.htpasswd;

    # proxy Jellyfin - copied from the Jellyfin docs
    location / {
        proxy_pass http://$jellyfin:8096;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-Protocol $scheme;
        proxy_set_header X-Forwarded-Host $http_host;

        # disable buffering; proxying gets very resource heavy otherwise
        proxy_buffering off;
    }

    # location block for Jellyfin /web - copied from the Jellyfin docs
    # purely for aesthetics
    location ~ ^/web/$ {
        proxy_pass http://$jellyfin:8096/web/index.html;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-Protocol $scheme;
        proxy_set_header X-Forwarded-Host $http_host;
    }

    # websocket Jellyfin - copied from the Jellyfin docs
    location /socket {
        proxy_pass http://$jellyfin:8096;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-Protocol $scheme;
        proxy_set_header X-Forwarded-Host $http_host;
    }
}
And sites-enabled/default:
# set access rate limit: only allow 4 requests per second
limit_req_zone $binary_remote_addr zone=one:10m rate=4r/s;

# caching map
map $sent_http_content_type $expires {
    default                   off;
    text/html                 epoch;
    text/css                  5m;
    application/javascript    5m;
    ~image/                   5m;
}

server {
    listen 80 default_server;
    listen [::]:80 default_server;
    server_name $webAddress;

    # only GET, HEAD, POST requests allowed
    if ($request_method !~ ^(GET|HEAD|POST)$ ) { return 444; }

    # only domain name requests allowed
    if ($host !~ ^($webAddress)$ ) { return 444; }

    # redirect to HTTPS
    if ($host = $webAddress) { return 301 https://$host$request_uri; }

    return 404;
}

server {
    listen [::]:443 ssl http2;
    listen 443 ssl http2;
    server_name $webAddress;
    root /var/www/html;
    index index.html;

    # if they come here using HTTP, bounce them to the correct scheme
    error_page 497 https://$server_name:$server_port$request_uri;

    # redirect errors to 404 page
    error_page 401 403 404 /404.html;

    # set 503 error page
    error_page 503 /503.html;

    # only domain name requests allowed
    if ($host !~ ^($webAddress)$ ) { return 444; }

    # only GET, HEAD, POST requests allowed
    if ($request_method !~ ^(GET|HEAD|POST)$ ) { return 444; }

    # block download agents
    if ($http_user_agent ~* LWP::Simple|BBBike|wget) { return 403; }

    # block some robots
    if ($http_user_agent ~* msnbot|scrapbot) { return 403; }

    # caching map expiration
    expires $expires;

    # cache static assets
    location ~* \.(jpg|jpeg|png|gif|ico|pdf|woff2|woff)$ {
        expires 5m;
    }

    # prevent deep linking
    location /img/ {
        valid_referers blocked $webAddress;
        if ($invalid_referer) { return 403; }
        referer_hash_bucket_size 128;
    }

    # SSL certs
    ssl_certificate ...;
    ssl_certificate_key ...;
    ssl_trusted_certificate ...;

    # HTTP security headers -- A+ rating
    add_header X-Frame-Options "SAMEORIGIN";
    add_header X-XSS-Protection "1; mode=block";
    add_header X-Content-Type-Options "nosniff";
    add_header Content-Security-Policy "base-uri 'self'; default-src 'none'; frame-ancestors 'none'; style-src 'self'; font-src 'self' https://fonts.gstatic.com; img-src 'self'; script-src 'self' http https; form-action 'self'; require-trusted-types-for 'script'";
    add_header Strict-Transport-Security "max-age=63072000; includeSubdomains; preload";
    add_header Referrer-Policy 'strict-origin';
    add_header Expect-CT 'enforce, max-age=3600';
    add_header Feature-Policy "autoplay 'none'; camera 'none'";
    add_header X-Permitted-Cross-Domain-Policies none;
    add_header Permissions-Policy 'autoplay=(); camera=()';

    location /nginx_status {
        stub_status on;
        access_log off;

        # restrict access to LAN
        allow 192.168.1.0/24;
        deny all;

        # security
        auth_basic "Restricted Content";
        auth_basic_user_file /etc/nginx/.htpasswd;
    }

    location / {
        try_files $uri $uri/ =404;

        # rate limit
        limit_req zone=one burst=10 nodelay;
    }
}

submitted by famesjranko to r/jellyfin

Beginner's critiques of Rust

Hey all. I've been a Java/C#/Python dev for a number of years. I noticed Rust topping the StackOverflow most loved language list earlier this year, and I've been hearing good things about Rust's memory model and "free" concurrency for awhile. When it recently came time to rewrite one of my projects as a small webservice, it seemed like the perfect time to learn Rust.
I've been at this for about a month and so far I'm not understanding the love at all. I haven't spent this much time fighting a language in awhile. I'll keep the frustration to myself, but I do have a number of critiques I wouldn't mind discussing. Perhaps my perspective as a beginner will be helpful to someone. Hopefully someone else has faced some of the same issues and can explain why the language is still worthwhile.
Fwiw - I'm going to make a lot of comparisons to the languages I'm comfortable with. I'm not attempting to make a value comparison of the languages themselves, but simply comparing workflows I like with workflows I find frustrating or counterintuitive.
Docs
When I have a question about a language feature in C# or Python, I go look at the official language documentation. Python in particular does a really nice job of breaking down what a class is designed to do and how to do it. Rust's standard docs are little more than Javadocs with extremely minimal examples. There are more examples in the Rust Book, but these too are super simplified. Anything more significant requires research on third-party sites like StackOverflow, and Rust is too new to have a lot of content there yet.
It took me a week and a half of fighting the borrow checker to realize that HashMap.get_mut() was not the correct way to get and modify a map entry whose value was a non-primitive object. Nothing in the official docs suggested this, and I was actually on the verge of quitting the language over this until someone linked Tour of Rust, which did have a useful map example, in a Reddit comment. (If any other poor soul stumbles across this - you need HashMap.entry().or_insert(), and you modify the resulting entry in place using *my_entry.value = whatever. The borrow checker doesn't allow getting the entry, modifying it, and putting it back in the map.)
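For reference, a minimal sketch of that entry() pattern (the map and key here are made up for illustration, not from my actual project):

use std::collections::HashMap;

fn main() {
    let mut scores: HashMap<String, i32> = HashMap::new();

    // or_insert() adds a default value if the key is missing and returns
    // a mutable reference, so the entry can be modified in place
    let entry = scores.entry(String::from("alice")).or_insert(0);
    *entry += 10;

    assert_eq!(scores["alice"], 10);
}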
Pit of Success/Failure
C# has the concept of a pit of success: the most natural thing to do should be the correct thing to do. It should be easy to succeed and hard to fail.
Rust takes the opposite approach: every natural thing to do is a landmine. Option.unwrap() can and will terminate my program. String.len() sets me up for a crash when I try to do character processing, because what I actually want is String.chars().count(). HashMap.get_mut() is only viable if I know ahead of time that the entry I want is already in the map, because HashMap.get_mut().unwrap_or() is a snake pit, and simply calling get_mut() is apparently enough for the borrow checker to think the map is mutated, so reinserting the map entry afterward causes a borrow error. If-else statements aren't idiomatic. Neither is return.
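To illustrate the String point (a standalone sketch; len() counts UTF-8 bytes, not characters):

fn main() {
    let s = "héllo";
    println!("{}", s.len());           // 6 - byte length, because é is 2 bytes in UTF-8
    println!("{}", s.chars().count()); // 5 - the actual character count
}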
Language philosophy
Python has the saying "we're all adults here." Nothing is truly private and devs are expected to be competent enough to know what they should and shouldn't modify. It's possible to monkey patch (overwrite) pretty much anything, including standard functions. The sky's the limit.
C# has visibility modifiers and the concept of sealing classes to prevent further extension or modification. You can get away with a lot of stuff using inheritance or even extension methods to tack on functionality to existing classes, but if the original dev wanted something to be private, it's (almost) guaranteed to be. (Reflection is still a thing, it's just understood to be dangerous territory a la Python's monkey patching.) This is pretty much "we're all professionals here"; I'm trusted to do my job but I'm not trusted with the keys to the nukes.
Rust doesn't let me so much as reference a variable twice in the same method. This is the functional equivalent of being put in a straitjacket because I can't be trusted to not hurt myself. It also means I can't do anything.
The borrow checker
This thing is legendary. I don't understand how it's smart enough to theoretically track data usage across threads, yet dumb enough to complain about variables which are only modified inside a single method. Worse still, it likes to complain about variables which aren't even modified.
Here's a fun example. I do the same assignment twice (in a real-world context, there are operations that don't matter in between.) This is apparently illegal unless Rust can move the value on the right-hand side of the assignment, even though the second assignment is technically a no-op.
//let Demo be any struct that doesn't implement Copy.
let mut demo_object: Option<Demo> = None;
let demo_object_2: Demo = Demo::new(1, 2, 3);
demo_object = Some(demo_object_2);
demo_object = Some(demo_object_2);
Querying an Option's inner value via .unwrap and querying it again via .is_none is also illegal, because .unwrap seems to move the value even if no mutations take place and the variable is immutable:
let demo_collection: Vec<Demo> = Vec::<Demo>::new();
let demo_object: Option<Demo> = None;

for collection_item in demo_collection {
    if demo_object.is_none() {
    }
    if collection_item.value1 > demo_object.unwrap().value1 {
    }
}
And of course, the HashMap example I mentioned earlier, in which calling get_mut apparently counts as mutating the map, regardless of whether the map contains the key being queried or not:
let mut demo_collection: HashMap<i32, Demo> = HashMap::<i32, Demo>::new();

demo_collection.insert(1, Demo::new(1, 2, 3));

let mut demo_entry = demo_collection.get_mut(&57);
let mut demo_value: &mut Demo;

//we can't call .get_mut.unwrap_or, because we can't construct the default
//value in-place. We'd have to return a reference to the newly constructed
//default value, which would become invalid immediately. Instead we get to
//do things the long way.
let mut default_value: Demo = Demo::new(2, 4, 6);

if demo_entry.is_some() {
    demo_value = demo_entry.unwrap();
} else {
    demo_value = &mut default_value;
}

demo_collection.insert(1, *demo_value);
None of this code is especially remarkable or dangerous, but the borrow checker seems absolutely determined to save me from myself. In a lot of cases, I end up writing code which is a lot more verbose than the equivalent Python or C# just trying to work around the borrow checker.
This is rather tongue-in-cheek, because I understand the borrow checker is integral to what makes Rust tick, but I think I'd enjoy this language a lot more without it.
Exceptions
I can't emphasize this one enough, because it's terrifying. The language flat out encourages terminating the program in the event of some unexpected error happening, forcing me to predict every possible execution path ahead of time. There is no forgiveness in the form of try-catch. The best I get is Option or Result, and nobody is required to use them. This puts me at the mercy of every single crate developer for every single crate I'm forced to use. If even one of them decides a specific input should cause a panic, I have to sit and watch my program crash.
Something like this came up in a Python program I was working on a few days ago - a web-facing third-party library didn't handle a web-related exception and it bubbled up to my program. I just added another except clause to the try-except I already had wrapped around that library call and that took care of the issue. In Rust, I'd have to find a whole new crate because I have no ability to stop this one from crashing everything around it.
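For comparison, the closest Rust equivalent to that try-except recovery, when a crate does return a Result instead of panicking, looks like this (parse_port is a made-up function for illustration):

use std::num::ParseIntError;

fn parse_port(input: &str) -> Result<u16, ParseIntError> {
    input.trim().parse::<u16>()
}

fn main() {
    // match on the Result instead of calling .unwrap(), which would panic
    match parse_port("8o96") {
        Ok(port) => println!("using port {}", port),
        Err(e) => eprintln!("bad port, falling back to 8096: {}", e),
    }
}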
Pushing stuff outside the standard library
Rust deliberately maintains a small standard library. The devs are concerned about the commitment of adding things that "must remain as-is until the end of time."
This basically forces me into a world where I have to get 50 billion crates with different design philosophies and different ways of doing things to play nicely with each other. It forces me into a world where any one of those crates can and will be abandoned at a moment's notice; I'll probably have to find replacements for everything every few years. And it puts me at the mercy of whoever developed those crates, who has the language's blessing to terminate my program if they feel like it.
Making more stuff standard would guarantee a consistent design philosophy, provide stronger assurance that things won't panic every three lines, and mean that yes, I can use that language feature as long as the language itself is around (assuming said feature doesn't get deprecated, but even then I'd have enough notice to find something else.)
Testing is painful
Tests are definitely second-class citizens in Rust. Unit tests are expected to sit in the same file as the production code they're testing. What?
There's no way to tag tests to run groups of tests later; tests can be run singly, using a wildcard match on the test function name, or can be ignored entirely using #[ignore]. That's it.
Language style
This one's subjective. I expect to take some flak for this and that's okay.
submitted by crab1122334 to r/rust

Red Hat OpenShift Container Platform Instruction Manual for Windows Powershell

Introduction to the manual
This manual is made to guide you step by step in setting up an OpenShift cloud environment on your own device. It will tell you what needs to be done, when it needs to be done, what you will be doing and why you will be doing it, all in one convenient manual made for Windows users. That said, if you want to try it on Linux or macOS, we did add the commands necessary to get the CodeReady Containers to run on your operating system. Be warned, however: there are some system requirements necessary to run the CodeReady Containers that we will be using. These requirements are specified within the chapter Minimum system requirements.
This manual is written for everyone with an interest in the Red Hat OpenShift Container Platform who has at least a basic understanding of the command line within PowerShell on Windows. Even though it is possible to use most of the manual for Linux or macOS, we will focus on how to do this within Windows.
If you follow this manual you will be able to do the following items by yourself:
● Installing the CodeReady Containers
● Updating OpenShift
● Configuring a CodeReady Container
● Configuring the DNS
● Accessing the OpenShift cluster
● Deploying the Mediawiki application
What is the OpenShift Container platform?
Red Hat OpenShift is a cloud development Platform as a Service (PaaS). It enables developers to develop and deploy their applications on a cloud infrastructure. It is based on the Kubernetes platform and is widely used by developers and IT operations worldwide. The OpenShift Container platform makes use of CodeReady Containers. CodeReady Containers are pre-configured containers that can be used for developing and testing purposes. There are also CodeReady Workspaces, these workspaces are used to provide any member of the development or IT team with a consistent, secure, and zero-configuration development environment.
The OpenShift Container Platform is widely used because it helps programmers and developers build their applications faster thanks to CodeReady Containers and CodeReady Workspaces, and it also allows them to test their applications in the same environment. One of the advantages provided by OpenShift is efficient container orchestration, which allows for faster container provisioning, deployment and management. It does this by streamlining and automating container management.
What knowledge is required or recommended to proceed with the installation?
To be able to follow this manual some knowledge is mandatory: because most of the commands are done within the command line interface, it is necessary to know how it works and how you can browse through files/folders. If you either don't have this basic knowledge or have trouble with the basic command line interface commands in PowerShell, then a cheat sheet might offer some help. We recommend the following cheat sheet for Windows:
https://www.sans.org/security-resources/sec560/windows_command_line_sheet_v1.pdf
Another option is to read through the operating system’s documentation or introduction guides, though the documentation can be overwhelming due to the sheer number of commands.
Microsoft: https://docs.microsoft.com/en-us/windows-server/administration/windows-commands/windows-commands
macOS
https://www.makeuseof.com/tag/mac-terminal-commands-cheat-sheet/
Linux
https://ubuntu.com/tutorials/command-line-for-beginners#2-a-brief-history-lesson https://www.guru99.com/linux-commands-cheat-sheet.html
http://cc.iiti.ac.in/docs/linuxcommands.pdf
Aside from the required knowledge there are also some things that can be helpful to know, just to make the use of OpenShift a bit simpler. This consists of some general knowledge of PaaS tools like Docker and Kubernetes.
Docker https://www.docker.com/
Kubernetes https://kubernetes.io/

System requirements

Minimum System requirements

The Red Hat OpenShift CodeReady Containers have the following minimum hardware requirements:
Hardware requirements
Code Ready Containers requires the following system resources:
● 4 virtual CPUs (vCPUs)
● 9 GB of free random-access memory
● 35 GB of storage space
● A physical CPU with Hyper-V (Intel) or SVM mode (AMD); this has to be enabled in the BIOS
Software requirements
The Red Hat OpenShift CodeReady Containers have the following minimum operating system requirements:
Microsoft Windows
On Microsoft Windows, the Red Hat OpenShift CodeReady Containers requires the Windows 10 Pro Fall Creators Update (version 1709) or newer. CodeReady Containers does not work on earlier versions or other editions of Microsoft Windows. Microsoft Windows 10 Home Edition is not supported.
macOS
On macOS, the Red Hat OpenShift CodeReady Containers requires macOS 10.12 Sierra or newer.
Linux
On Linux, the Red Hat OpenShift CodeReady Containers is only supported on Red Hat Enterprise Linux/CentOS 7.5 or newer and on the latest two stable Fedora releases.
When using Red Hat Enterprise Linux, the machine running CodeReady Containers must be registered with the Red Hat Customer Portal.
Ubuntu 18.04 LTS or newer and Debian 10 or newer are not officially supported and may require manual set up of the host machine.

Required additional software packages for Linux

The CodeReady Containers on Linux require the libvirt and Network Manager packages to run. Consult the following table to find the command used to install these packages for your Linux distribution:
Table 1.1 Package installation commands by distribution
Linux Distribution Installation command
Fedora: sudo dnf install NetworkManager
Red Hat Enterprise Linux/CentOS: su -c 'yum install NetworkManager'
Debian/Ubuntu: sudo apt install qemu-kvm libvirt-daemon libvirt-daemon-system network-manager

Installation

Getting started with the installation

To install CodeReady Containers a few steps must be undertaken. Because an OpenShift account is necessary to use the application, this will be the first step. An account can be made on “https://www.openshift.com/”, where you need to press login and after that select the option “Create one now”.
After making an account the next step is to download the latest release of CodeReady Containers and the pull secret from “https://cloud.redhat.com/openshift/install/crc/installer-provisioned”. Make sure to download the version corresponding to your platform and/or operating system. After downloading the right version, the contents have to be extracted from the archive to a location in your $PATH. The pull secret should be saved because it is needed later.
The command line interface has to be opened before we can continue with the installation. For Windows we will use PowerShell. All the commands we use during the installation procedure of this guide are going to be done in this command line interface unless stated otherwise. To be able to run the commands within the command line interface, use it to go to the location in your $PATH where you extracted the CodeReady zip.
If you have installed an outdated version and you wish to update, then you can delete the existing CodeReady Containers virtual machine with the $crc delete command. After deleting the container, you must replace the old crc binary with a newly downloaded binary of the latest release.
C:\Users\[username]\$PATH>crc delete 
When you have done the previous steps, please confirm that the correct and up-to-date crc binary is in use by checking it with the $crc version command; this should provide you with the version that is currently installed.
C:\Users\[username]\$PATH>crc version 
To set up the host operating system for the CodeReady Containers virtual machine you have to run the $crc setup command. After running crc setup, crc start will create a minimal OpenShift 4 cluster in the folder where the executable is located.
C:\Users\[username]>crc setup 

Setting up CodeReady Containers

Now we need to set up the new CodeReady Containers release with the $crc setup command. This command will perform the operations necessary to run the CodeReady Containers and create the ~/.crc directory if it did not previously exist. In the process you have to supply your pull secret; once this process is completed you have to reboot your system. When the system has restarted you can start the new CodeReady Containers virtual machine with the $crc start command. The $crc start command starts the CodeReady virtual machine and OpenShift cluster.
You cannot change the configuration of an existing CodeReady Containers virtual machine. So if you have a CodeReady Containers virtual machine and you want to make configuration changes you need to delete the virtual machine with the $crc delete command and create a new virtual machine and start that one with the configuration changes. Take note that deleting the virtual machine will also delete the data stored in the CodeReady Containers. So, to prevent data loss we recommend you save the data you wish to keep. Also keep in mind that it is not necessary to change the default configuration to start OpenShift.
C:\Users\[username]\$PATH>crc setup 
Before starting the machine, keep in mind that it is not possible to change the configuration of the virtual machine afterwards. For this tutorial, however, it is not necessary to change the configuration; if you don’t want to make any changes, please continue by starting the machine with the crc start command.
C:\Users\[username]\$PATH>crc start 
Note: it is possible that you will get a nameserver error later on; if this is the case, please start it with crc start -n 1.1.1.1

Configuration

It is not necessary to change the default configuration to continue with this tutorial; this chapter is here for those that wish to do so and know what they are doing. However, for macOS and Linux it is necessary to change the DNS settings.

Configuring the CodeReady Containers

To start the configuration of the CodeReady Containers use the command crc config. This command allows you to configure the crc binary and the CodeReady virtual machine. The command requires a subcommand; the available subcommands are:
get, this command allows you to see the value of a configurable property
set/unset, these commands set or unset the value of a configurable property
view, this command displays the full configuration in read-only mode.
These commands need to operate on named configurable properties. To list all the available properties, you can run the command $crc config --help.
Throughout this manual we will use the $crc config command a few times to change some properties needed for the configuration.
There is also the possibility to use the crc config command to configure the behavior of the checks that are done by the $crc start and $crc setup commands. By default, the startup checks will stop the process if their conditions are not met. To bypass this, you can set the value of a property that starts with skip-check or warn-check to true, to skip the check or issue a warning instead of ending up with an error.
C:\Users\[username]\$PATH>crc config get <property>
C:\Users\[username]\$PATH>crc config set <property> <value>
C:\Users\[username]\$PATH>crc config unset <property>
C:\Users\[username]\$PATH>crc config view
C:\Users\[username]\$PATH>crc config --help

Configuring the Virtual Machine

You can use the cpus and memory properties to configure the default number of vCPUs and the amount of memory available to the virtual machine.
To increase the number of vCPUs available to the virtual machine, use $crc config set cpus <number>. Keep in mind that the default number of vCPUs is 4 and the number of vCPUs you wish to assign must be equal to or greater than the default value.
To increase the memory available to the virtual machine, use $crc config set memory <size-in-mib>. Keep in mind that the default amount of memory is 9216 MiB and the amount of memory you wish to assign must be equal to or greater than the default value.
C:\Users\[username]\$PATH>crc config set cpus <number>
C:\Users\[username]\$PATH>crc config set memory <size-in-mib>
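For example, assuming you want to assign 6 vCPUs and 12288 MiB of memory (adjust the numbers to your own hardware), the commands would look like this:
C:\Users\[username]\$PATH>crc config set cpus 6
C:\Users\[username]\$PATH>crc config set memory 12288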

Configuring the DNS

Windows / General DNS setup

There are two domain names used by the OpenShift cluster that are managed by the CodeReady Containers, these are:
crc.testing, this is the domain for the core OpenShift services.
apps-crc.testing, this is the domain used for accessing OpenShift applications that are deployed on the cluster.
Configuring the DNS settings in Windows is done by executing crc setup. This command automatically adjusts the DNS configuration on the system. When executing crc start, additional checks to verify the configuration will be executed.

macOS DNS setup

macOS expects the following DNS configuration for the CodeReady Containers:
● The CodeReady Containers creates a file that instructs macOS to forward all DNS requests for the testing domain to the CodeReady Containers virtual machine. This file is created at /etc/resolver/testing.
● The oc binary requires the following CodeReady Containers entry to function properly: api.crc.testing. An entry is added to /etc/hosts pointing at the VM IP address.

Linux DNS setup

CodeReady Containers expects a slightly different DNS configuration on Linux, where NetworkManager is expected to manage networking. On Linux, NetworkManager uses dnsmasq through a configuration file, namely /etc/NetworkManager/conf.d/crc-nm-dnsmasq.conf.
To set it up properly, the dnsmasq instance has to forward requests for the crc.testing and apps-crc.testing domains to “192.168.130.11”. In /etc/NetworkManager/conf.d/crc-nm-dnsmasq.conf this will look like the following:
● server=/crc.testing/192.168.130.11
● server=/apps-crc.testing/192.168.130.11

Accessing the Openshift Cluster

Accessing the Openshift web console

To gain access to the OpenShift cluster running in the CodeReady virtual machine you need to make sure that the virtual machine is running before continuing with this chapter. The OpenShift cluster can be accessed through the OpenShift web console or the client binary (oc).
First you need to execute the $crc console command, this command will open your web browser and direct a tab to the web console. After that, you need to select the htpasswd_provider option in the OpenShift web console and log in as a developer user with the output provided by the crc start command.
It is also possible to view the password for kubeadmin and developer users by running the $crc console --credentials command. While you can access the cluster through the kubeadmin and developer users, it should be noted that the kubeadmin user should only be used for administrative tasks such as user management and the developer user for creating projects or OpenShift applications and the deployment of these applications.
C:\Users\[username]\$PATH>crc console
C:\Users\[username]\$PATH>crc console --credentials

Accessing the OpenShift cluster with oc

To gain access to the OpenShift cluster with the use of the oc command you need to complete several steps.
Step 1.
Execute the $crc oc-env command to print the command needed to add the cached oc binary to your PATH:
C:\Users\[username]\$PATH>crc oc-env 
Step 2.
Execute the printed command. The output will look something like the following:
PS C:\Users\OpenShift> crc oc-env
$Env:PATH = "C:\Users\OpenShift\.crc\bin\oc;$Env:PATH"
# Run this command to configure your shell:
# & crc oc-env | Invoke-Expression
This means we have to execute the command that the output gives us, in this case that is:
C:\Users\[username]\$PATH>crc oc-env | Invoke-Expression
Note: this has to be executed every time you start; a solution is to move the oc binary to the same path as the crc binary.
To test if this step went correctly, execute the following command; if it returns without errors, oc is set up properly:
C:\Users\[username]\$PATH>.\oc 
Step 3
Now you need to log in as the developer user; this can be done using the following command:
$oc login -u developer https://api.crc.testing:6443
Keep in mind that the $crc start will provide you with the password that is needed to login with the developer user.
C:\Users\[username]\$PATH>oc login -u developer https://api.crc.testing:6443 
Step 4
The oc can now be used to interact with your OpenShift cluster. If you for instance want to verify if the OpenShift cluster Operators are available, you can execute the command
$oc get co 
Keep in mind that by default the CodeReady Containers disables the functions provided by the machine-config and monitoring Operators.
C:\Users\[username]\$PATH>oc get co 

Demonstration

Now that you are able to access the cluster, we will take you on a tour through some of the possibilities within OpenShift Container Platform.
We will start by creating a project. Within this project we will import an image, and with this image we are going to build an application. After building the application we will explain how upscaling and downscaling can be used within the created application.
As the next step we will show the user how to make changes in the network route. We also show how monitoring can be used within the platform, however within the current version of CodeReady Containers this has been disabled.
Lastly, we will show the user how to use user management within the platform.

Creating a project

To be able to create a project within the console you have to login on the cluster. If you have not yet done this, this can be done by running the command crc console in the command line and logging in with the login data from before.
When you are logged in as admin, switch to Developer. If you're logged in as a developer, you don't have to switch. Switching between users can be done with the dropdown menu top left.
Now that you are properly logged in press the dropdown menu shown in the image below, from there click on create a project.
https://preview.redd.it/ytax8qocitv51.png?width=658&format=png&auto=webp&s=72d143733f545cf8731a3cca7cafa58c6507ace2
When you press the correct button, the following image will pop up. Here you can give your project a name and description. We chose to name it CodeReady with a display name CodeReady Container.
https://preview.redd.it/vtaxadwditv51.png?width=594&format=png&auto=webp&s=e3b004bab39fb3b732d96198ed55fdd99259f210

Importing image

The Containers in OpenShift Container Platform are based on OCI or Docker formatted images. An image is a binary that contains everything needed to run a container as well as the metadata of the requirements needed for the container.
Within the OpenShift Container Platform it’s possible to obtain images in a number of ways. There is an integrated Docker registry that offers the possibility to download new images “on the fly”. In addition, OpenShift Container Platform can use third party registries such as:
- https://hub.docker.com/
- https://catalog.redhat.com/software/containers/search
Within this manual we are going to import an image from the Red Hat container catalog. In this example we’ll be using MediaWiki.
Search for the application in https://catalog.redhat.com/software/containers/search

https://preview.redd.it/c4mrbs0fitv51.png?width=672&format=png&auto=webp&s=f708f0542b53a9abf779be2d91d89cf09e9d2895
Navigate to “Get this image”
Follow the steps to “create a registry service account”, after that you can copy the YAML.
https://preview.redd.it/b4rrklqfitv51.png?width=1323&format=png&auto=webp&s=7a2eb14a3a1ba273b166e03e1410f06fd9ee1968
After the YAML has been copied we will go to the topology view and click on the YAML button
https://preview.redd.it/k3qzu8dgitv51.png?width=869&format=png&auto=webp&s=b1fefec67703d0a905b00765f0047fe7c6c0735b
Then we have to paste in the YAML, put in the name, namespace and your pull secret name (which you created through your registry account) and click on create.
https://preview.redd.it/iz48kltgitv51.png?width=781&format=png&auto=webp&s=4effc12e07bd294f64a326928804d9a931e4d2bd
Run the import command within powershell
$oc import-image openshift4/mediawiki --from=registry.redhat.io/openshift4/mediawiki --confirm
imagestream.image.openshift.io/mediawiki imported

Creating and managing an application

There are a few ways to create and manage applications. Within this demonstration we’ll show how to create an application from the previously imported image.

Creating the application

To create an application from the previously imported image, go back to the console and topology. From there, select container image.
https://preview.redd.it/6506ea4iitv51.png?width=869&format=png&auto=webp&s=c0231d70bb16c76cd131e6b71256e93550cc8b37
For the option image you'll want to select the “image stream tag from internal registry” option. Give the application a name and then create the deployment.
https://preview.redd.it/tk72idniitv51.png?width=813&format=png&auto=webp&s=a4e662cf7b96604d84df9d04ab9b90b5436c803c
If everything went right during the creating process you should see the following, this means that the application is successfully running.
https://preview.redd.it/ovv9l85jitv51.png?width=901&format=png&auto=webp&s=f78f350207add0b8a979b6da931ff29ffa30128c

Scaling the application

In OpenShift there is a feature called autoscaling. There are two types of application scaling: vertical scaling and horizontal scaling. Vertical scaling means adding more CPU and disk to a single machine and is no longer supported by OpenShift; horizontal scaling means increasing the number of machines.
One of the ways to scale an application is by increasing the number of pods. This can be done by going to a pod within the view as seen in the previous step. By either pressing the up or down arrow more pods of the same application can be added. This is similar to horizontal scaling and can result in better performance when there are a lot of active users at the same time.
https://preview.redd.it/s6i1vbcrltv51.png?width=602&format=png&auto=webp&s=e62cbeeed116ba8c55704d61a990fc0d8f3cfaa1
In the picture above we see the number of nodes and pods and how many resources those nodes and pods are using. This is something to keep in mind if you want to scale up your application, the more you scale it up, the more resources it will take up.

https://preview.redd.it/quh037wmitv51.png?width=194&format=png&auto=webp&s=5e326647b223f3918c259b1602afa1b5fbbeea94

Network

Since the OpenShift Container Platform is built on Kubernetes, it might be interesting to know some theory about its networking. Kubernetes ensures that Pods within OpenShift can communicate with each other via the network and assigns each of them their own IP address. This makes all containers within a Pod behave as if they were on the same host. Giving each Pod its own IP address means Pods can be treated like physical hosts or virtual machines in terms of port mapping, networking, naming, service discovery, load balancing, application configuration and migration. To run multiple services such as front-end and back-end services, the OpenShift Container Platform has a built-in DNS.
One of the changes that can be made to the networking of a Pod is the Route. We’ll show you how this can be done in this demonstration.
The Route is not the only thing that can be changed and/or configured. Two other options that might be interesting, but will not be demonstrated in this manual, are:
- Ingress controller: within OpenShift it is possible to set your own certificate. A user must have a certificate/key pair in PEM-encoded files, with the certificate signed by a trusted authority.
- Network policies: by default all pods in a project are accessible from other pods and network locations. To isolate one or more pods in a project, it is possible to create Network Policy objects in that project to indicate the allowed incoming connections. Project administrators can create and delete Network Policy objects within their own project.
There is a search function within the Container Platform. We’ll use this to search for the network routes and show how to add a new route.
https://preview.redd.it/8jkyhk8pitv51.png?width=769&format=png&auto=webp&s=9a8762df5bbae3d8a7c92db96b8cb70605a3d6da
You can add items that you use a lot to the navigation
https://preview.redd.it/t32sownqitv51.png?width=1598&format=png&auto=webp&s=6aab6f17bc9f871c591173493722eeae585a9232
For this example, we will add Routes to navigation.
https://preview.redd.it/pm3j7ljritv51.png?width=291&format=png&auto=webp&s=bc6fbda061afdd0780bbc72555d809b84a130b5b
Now that we’ve added Routes to the navigation, we can start the creation of the Route by clicking on “Create route”.
https://preview.redd.it/5lgecq0titv51.png?width=1603&format=png&auto=webp&s=d548789daaa6a8c7312a419393795b52da0e9f75
Fill in the name, select the service and the target port from the drop-down menu and click on Create.
https://preview.redd.it/qczgjc2uitv51.png?width=778&format=png&auto=webp&s=563f73f0dc548e3b5b2319ca97339e8f7b06c9d6
As you can see, we’ve successfully added the new route to our application.
https://preview.redd.it/gxfanp2vitv51.png?width=1588&format=png&auto=webp&s=1aae813d7ad0025f91013d884fcf62c5e7d109f1
Storage
OpenShift makes use of persistent storage, which uses persistent volume claims (PVCs). PVCs allow the developer to request persistent volumes (PVs) without needing any knowledge about the underlying infrastructure.
Within this storage there are a few configuration options.
It is however important to know how to manually reclaim persistent volumes: if you delete a PV, the associated data will not be automatically deleted with it, and until it is cleaned up you cannot reassign the storage to another PV.
To manually reclaim the PV, you need to follow the following steps:
Step 1: Delete the PV; this can be done by executing the following command:
$oc delete pv <pv-name>
Step 2: Now you need to clean up the data on the associated storage asset
Step 3: Now you can delete the associated storage asset, or if you wish to reuse the same storage asset, you can create a new PV with the storage asset definition.
It is also possible to directly change the reclaim policy within OpenShift, to do this you would need to follow the following steps:
Step 1: Get a list of the PVs in your cluster
$oc get pv 
This will give you a list of all the PVs in your cluster and will display the following attributes: Name, Capacity, Access Modes, Reclaim Policy, Status, Claim, Storage Class, Reason and Age.
Step 2: Now choose the PV you wish to change and execute one of the following commands, depending on your preferred policy:
$oc patch pv <pv-name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
In this example the reclaim policy will be changed to Retain.
$oc patch pv <pv-name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Recycle"}}'
In this example the reclaim policy will be changed to Recycle.
$oc patch pv <pv-name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Delete"}}'
In this example the reclaim policy will be changed to Delete.

Step 3: After this you can check the PV to verify the change by executing this command again:
$oc get pv 

Monitoring

Within Red Hat OpenShift there is the possibility to monitor the data that has been created by your containers, applications, and pods. To do so, click on the menu option in the top left corner. Check if you are logged in as Developer and click on “Monitoring”. Normally this function is not activated within the CodeReady Containers, because it uses a lot of resources (RAM and CPU) to run.
https://preview.redd.it/an0wvn6zitv51.png?width=228&format=png&auto=webp&s=51abf8cc31bd763deb457d49514f99ee81d610ec
Once you have activated “Monitoring” you can change the “Time Range” and “Refresh Interval” in the top right corner of your screen. This will change the monitoring data on your screen.
https://preview.redd.it/e0yvzsh1jtv51.png?width=493&format=png&auto=webp&s=b2c563635cfa60ea7ce2f9c146aa994df6aa1c34
Within this function you can also monitor “Events”. These events are records of important information and are useful for monitoring and troubleshooting within the OpenShift Container Platform.
https://preview.redd.it/l90vkmp3jtv51.png?width=602&format=png&auto=webp&s=4e97f14bedaec7ededcdcda96e7823f77ced24c2

User management

According to the OpenShift documentation, a user is an entity that interacts with the OpenShift Container Platform API. This can be a developer developing applications or an administrator managing the cluster. Users can be assigned to groups, which set the permissions applied to all the group’s members. For example, you can give API access to a group, which gives all members of the group API access.
There are multiple ways to create a user, depending on the configured identity provider. The DenyAll identity provider is the default within the OpenShift Container Platform; this default denies access for all usernames and passwords.
First, we’re going to create a new user. The way this is done depends on the identity provider, specifically on the mapping method used as part of the identity provider configuration.
For more information on what mapping methods are and how they function, see:
https://docs.openshift.com/enterprise/3.1/install_config/configuring_authentication.html
With the default mapping method, the steps will be as follows:
$oc create user <username>
Next up, we’ll create an OpenShift Container Platform Identity. Use the name of the identity provider and the name that uniquely represents this identity in the scope of the identity provider:
$oc create identity <identity-provider>:<identity-provider-user-name>
The <identity-provider> is the name of the identity provider in the master configuration. For example, the following command creates an identity with identity provider ldap_provider and the identity provider username mediawiki_s.
$oc create identity ldap_provider:mediawiki_s 
Create a user identity mapping for the created user and identity:
$oc create useridentitymapping <identity-provider>:<identity-provider-user-name> <username>
For example, the following command maps the identity to the user:
$oc create useridentitymapping ldap_provider:mediawiki_s mediawiki 
Now we’re going to assign a role to this new user; this can be done by executing the following command:
$oc create clusterrolebinding <name> \ --clusterrole=<role> --user=<username>
The --clusterrole option is used to give the user a specific role, like a cluster user with admin privileges. The cluster admin has access to all resources and is able to manage the access levels of other users.
Below is an example of the admin clusterrole command:
$oc create clusterrolebinding registry-controller \ --clusterrole=cluster-admin --user=admin 

What did you achieve?

If you followed all the steps within this manual you now should have a functioning Mediawiki Application running on your own CodeReady Containers. During the installation of this application on CodeReady Containers you have learned how to do the following things:
● Installing the CodeReady Containers
● Updating OpenShift
● Configuring a CodeReady Container
● Configuring the DNS
● Accessing the OpenShift cluster
● Deploying an application
● Creating new users
With these skills you’ll be able to set up your own Container Platform environment and host applications of your choosing.

Troubleshooting

Nameserver
There is the possibility that your CodeReady container can't connect to the internet due to a Nameserver error. When this is encountered a working fix for us was to stop the machine and then start the CRC machine with the following command:
C:\Users\[username]\$PATH>crc start -n 1.1.1.1 
Hyper-V admin
Should you run into a problem with Hyper-V it might be because your user is not an admin and therefore can’t access the Hyper-V admin user group.
  1. Click Start > Control Panel > Administration Tools > Computer Management. The Computer Management window opens.
  2. Click System Tools > Local Users and Groups > Groups. The list of groups opens.
  3. Double-click the Hyper-V Administrators group. The Hyper-V Administrators Properties window opens.
  4. Click Add. The Select Users or Groups window opens.
  5. In the Enter the object names to select field, enter the user account name to whom you want to assign permissions, and then click OK.
  6. Click Apply, and then click OK.

Terms and definitions

These terms and definitions will be expanded upon; below you can see an example of how this is going to look, together with a few terms that require definitions.
Kubernetes is an open-source system for automating deployment, scaling, and management of containerized applications. Openshift is based on Kubernetes.
Clusters are a collection of multiple nodes which communicate with each other to perform a set of operations.
Containers are the basic units of OpenShift applications. These container technologies are lightweight mechanisms for isolating running processes so that they are limited to interacting with only their designated resources.
CodeReady Container is a minimal, preconfigured cluster that is used for development and testing purposes.
CodeReady Workspaces uses Kubernetes and containers to provide any member of the development or IT team with a consistent, secure, and zero-configuration development environment.

Sources

  1. https://www.ibm.com/support/knowledgecenter/en/SSMKFH/com.ibm.apmaas.doc/install/hyperv_config_add_nonadmin_user_hyperv_usergroup.html
  2. https://access.redhat.com/documentation/en-us/openshift_container_platform/4.5/
  3. https://docs.openshift.com/container-platform/3.11/admin_guide/manage_users.html
submitted by Groep6HHS to r/openshift

Node.js Application Monitoring with Prometheus and Grafana

Hi guys, we published this article on our blog (here) some time ago and I thought it could be interesting for r/node to read as well, since we got some good feedback on it!

What is application monitoring and why is it necessary?

Application monitoring is a method that uses software tools to gain insights into your software deployments. This can range from simple health checks, to see if the server is available, to more advanced setups where a monitoring library is integrated into your server that sends data to a dedicated monitoring service. It can even involve the client side of your application, offering more detailed insights into the user experience.
For every developer, monitoring should be a crucial part of the daily work, because you need to know how the software behaves in production. You can let your testers work with your system and try to mock interactions or high loads, but these techniques will never be the same as the real production workload.

What is Prometheus and how does it work?

Prometheus is an open-source monitoring system that was created in 2012 by SoundCloud. In 2016, Prometheus became the second project (following Kubernetes) to be hosted by the Cloud Native Computing Foundation.
https://preview.redd.it/8kshgh0qpor51.png?width=1460&format=png&auto=webp&s=455c37b1b1b168d732e391a882598e165c42501a
The Prometheus server collects metrics from your servers and other monitoring targets by pulling their metric endpoints over HTTP at a predefined time interval. For ephemeral and batch jobs, for which metrics can't be scraped periodically due to their short-lived nature, Prometheus offers a Pushgateway. This is an intermediate server to which monitoring targets can push their metrics before exiting. The data is retained there until the Prometheus server pulls it later.
The core data structure of Prometheus is the time series, which is essentially a list of timestamped values that are grouped by metric.
With PromQL (Prometheus Query Language), Prometheus provides a functional query language allowing for selection and aggregation of time series data in real-time. The result of a query can be viewed directly in the Prometheus web UI, or consumed by external systems such as Grafana via the HTTP API.
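As a small example, a PromQL query of the following form computes the per-second request rate over the last five minutes (the metric name is an assumed example of a histogram's count series, matching the custom metric defined later in this article):

rate(http_request_duration_seconds_count[5m])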

How to use prom-client to export metrics in Node.js for Prometheus?

prom-client is the most popular Prometheus client library for Node.js. It provides the building blocks to export metrics to Prometheus via the pull and push methods and supports all Prometheus metric types such as histograms, summaries, gauges and counters.

Setup sample Node.js project

Create a new directory and set up the Node.js project:
$ mkdir example-nodejs-app
$ cd example-nodejs-app
$ npm init -y

Install prom-client

The prom-client npm module can be installed via:
$ npm install prom-client 

Exposing default metrics

Every Prometheus client library comes with predefined default metrics that are assumed to be good for all applications on the specific runtime. The prom-client library also follows this convention. The default metrics are useful for monitoring the usage of resources such as memory and CPU.
You can capture and expose the default metrics with following code snippet:
const http = require('http')
const url = require('url')
const client = require('prom-client')

// Create a Registry which registers the metrics
const register = new client.Registry()

// Add a default label which is added to all metrics
register.setDefaultLabels({
  app: 'example-nodejs-app'
})

// Enable the collection of default metrics
client.collectDefaultMetrics({ register })

// Define the HTTP server
const server = http.createServer(async (req, res) => {
  // Retrieve route from request object
  const route = url.parse(req.url).pathname

  if (route === '/metrics') {
    // Return all metrics in the Prometheus exposition format
    res.setHeader('Content-Type', register.contentType)
    res.end(register.metrics())
  }
})

// Start the HTTP server which exposes the metrics on http://localhost:8080/metrics
server.listen(8080)

Exposing custom metrics

While default metrics are a good starting point, at some point, you’ll need to define custom metrics in order to stay on top of things.
Capturing and exposing a custom metric for HTTP request durations might look like this:
const http = require('http')
const url = require('url')
const client = require('prom-client')

// Create a Registry which registers the metrics
const register = new client.Registry()

// Add a default label which is added to all metrics
register.setDefaultLabels({
  app: 'example-nodejs-app'
})

// Enable the collection of default metrics
client.collectDefaultMetrics({ register })

// Create a histogram metric (prom-client timers record durations in seconds)
const httpRequestDuration = new client.Histogram({
  name: 'http_request_duration_seconds',
  help: 'Duration of HTTP requests in seconds',
  labelNames: ['method', 'route', 'code'],
  buckets: [0.1, 0.3, 0.5, 0.7, 1, 3, 5, 7, 10]
})

// Register the histogram
register.registerMetric(httpRequestDuration)

// Define the HTTP server
const server = http.createServer(async (req, res) => {
  // Start the timer
  const end = httpRequestDuration.startTimer()

  // Retrieve route from request object
  const route = url.parse(req.url).pathname

  if (route === '/metrics') {
    // Return all metrics in the Prometheus exposition format
    res.setHeader('Content-Type', register.contentType)
    res.end(register.metrics())
  }

  // End timer and add labels
  end({ route, code: res.statusCode, method: req.method })
})

// Start the HTTP server which exposes the metrics on http://localhost:8080/metrics
server.listen(8080)
Copy the above code into a file called server.js and start the Node.js HTTP server with the following command:
$ node server.js 
You should now be able to access the metrics via http://localhost:8080/metrics.
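You can also check the endpoint from the command line with curl; the output below is a trimmed illustration (the exact metric names and values depend on your runtime and prom-client version):

$ curl http://localhost:8080/metrics
# HELP process_resident_memory_bytes Resident memory size in bytes.
# TYPE process_resident_memory_bytes gauge
process_resident_memory_bytes{app="example-nodejs-app"} 31858688
# HELP nodejs_eventloop_lag_seconds Lag of event loop in seconds.
# TYPE nodejs_eventloop_lag_seconds gauge
nodejs_eventloop_lag_seconds{app="example-nodejs-app"} 0.000211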

How to scrape metrics with Prometheus

Prometheus is available as a Docker image and can be configured via a YAML file.
Create a configuration file called prometheus.yml with the following content:
global:
  scrape_interval: 5s
scrape_configs:
  - job_name: "example-nodejs-app"
    static_configs:
      - targets: ["docker.for.mac.host.internal:8080"]
The config file tells Prometheus to scrape all targets every 5 seconds. The targets are defined under scrape_configs. On Mac, you need to use docker.for.mac.host.internal as the host so that the Prometheus Docker container can scrape the metrics of the local Node.js HTTP server. On Windows, use docker.for.win.localhost, and on Linux use localhost.
Use the docker run command to start the Prometheus Docker container and mount the configuration file (prometheus.yml):
$ docker run --rm -p 9090:9090 \
    -v `pwd`/prometheus.yml:/etc/prometheus/prometheus.yml \
    prom/prometheus:v2.20.1
Windows users need to replace pwd with the path to their current working directory.
You should now be able to access the Prometheus Web UI at http://localhost:9090.
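A quick way to confirm that Prometheus is actually scraping the app is to open Status > Targets in the UI, or to query the built-in up metric, which returns 1 for every target whose last scrape succeeded:

up{job="example-nodejs-app"}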

What is Grafana and how does it work?

Grafana is a web application that allows you to visualize data sources via graphs or charts. It comes with a variety of chart types, allowing you to choose whatever fits your monitoring data needs. Multiple charts are grouped into dashboards in Grafana, so that multiple metrics can be viewed at once.
The metrics displayed in the Grafana charts come from data sources. Prometheus is one of the supported data sources for Grafana, but it can also use other systems, like AWS CloudWatch or Azure Monitor.
Grafana also allows you to define alerts that will be triggered if certain issues arise, meaning you’ll receive an email notification if something goes wrong. For a more advanced alerting setup, check out the Grafana integration for Opsgenie.

Starting Grafana

Grafana is also available as a Docker container, and its data sources can be configured via a configuration file.
Create a configuration file called datasources.yml with the following content:
apiVersion: 1
datasources:
  - name: Prometheus
    type: prometheus
    access: proxy
    orgId: 1
    url: http://docker.for.mac.host.internal:9090
    basicAuth: false
    isDefault: true
    editable: true
The configuration file specifies Prometheus as a data source for Grafana. Note that on Mac, we need to use docker.for.mac.host.internal as the host so that Grafana can access Prometheus. On Windows, use docker.for.win.localhost, and on Linux use localhost.
Use the following command to start a Grafana Docker container and mount the data source configuration file (datasources.yml). We also pass some environment variables to disable the login form and allow anonymous access to Grafana:
$ docker run --rm -p 3000:3000 \
    -e GF_AUTH_DISABLE_LOGIN_FORM=true \
    -e GF_AUTH_ANONYMOUS_ENABLED=true \
    -e GF_AUTH_ANONYMOUS_ORG_ROLE=Admin \
    -v `pwd`/datasources.yml:/etc/grafana/provisioning/datasources/datasources.yml \
    grafana/grafana:7.1.5
Windows users need to replace pwd with the path to their current working directory.
You should now be able to access the Grafana Web UI at http://localhost:3000.

Configuring a Grafana Dashboard

Once the metrics are available in Prometheus, we want to view them in Grafana. This requires creating a dashboard and adding panels to that dashboard:
  1. Go to the Grafana UI at http://localhost:3000, click the + button on the left, and select Dashboard.
  2. In the new dashboard, click on the Add new panel button.
  3. In the Edit panel view, you can select a metric and configure a chart for it.
  4. The Metrics drop-down on the bottom left allows you to choose from the available metrics. Let’s use one of the default metrics for this example.
  5. Type process_resident_memory_bytes into the Metrics input and {{app}} into the Legend input.
  6. On the right panel, enter Memory Usage for the Panel title.
  7. As the unit of the metric is bytes, select bytes (Metric) for the left y-axis in the Axes section so that the chart is easy to read.
You should now see a chart showing the memory usage of the Node.js HTTP server.
Press Apply to save the panel. Back on the dashboard, click the small "save" symbol at the top right; a pop-up will appear that lets you save your newly created dashboard for later use.
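Clicking through the UI is fine for a first dashboard, but Grafana can also provision dashboards from files, which is handy once you want them in version control. A minimal sketch of a dashboard provider config (the file name and paths here are illustrative):

# provisioning/dashboards/dashboards.yml (illustrative location)
apiVersion: 1
providers:
  - name: 'default'
    orgId: 1
    folder: ''
    type: file
    options:
      # Grafana picks up any dashboard JSON files in this directory
      path: /var/lib/grafana/dashboards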

Setting up alerts in Grafana

Since nobody wants to sit in front of Grafana all day watching and waiting to see if things go wrong, Grafana allows you to define alerts. These alerts regularly check whether a metric adheres to a specific rule, for example, whether the errors per second have exceeded a specific value.
Alerts can be set up for every panel in your dashboards.
  1. Go into the Grafana dashboard we just created.
  2. Click on a panel title and select edit.
  3. Once in the edit view, select "Alerts" from the middle tabs, and press the Create Alert button.
  4. In the Conditions section, specify 42000000 after IS ABOVE. This tells Grafana to trigger an alert when the Node.js HTTP server consumes more than 42 MB of memory.
  5. Save the alert by pressing the Apply button in the top right.
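As an alternative to Grafana alerts, the same condition can be expressed as a Prometheus alerting rule and evaluated by Prometheus itself (notifications then go through an Alertmanager). A minimal sketch, with an illustrative file name and alert name:

# alert_rules.yml -- referenced from prometheus.yml via rule_files (illustrative)
groups:
  - name: example-nodejs-app
    rules:
      - alert: HighMemoryUsage
        # Fires when resident memory stays above 42 MB for 5 minutes
        expr: process_resident_memory_bytes{app="example-nodejs-app"} > 42000000
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "Node.js app is using more than 42 MB of memory"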

Sample code repository

We created a code repository that contains a collection of Docker containers with Prometheus, Grafana, and a Node.js sample application. It also contains a Grafana dashboard, which follows the RED monitoring methodology.
Clone the repository:
$ git clone https://github.com/coder-society/nodejs-application-monitoring-with-prometheus-and-grafana.git 
The JavaScript code of the Node.js app is located in the /example-nodejs-app directory. All containers can be started conveniently with docker-compose. Run the following command in the project root directory:
$ docker-compose up -d 
After executing the command, a Node.js app, Grafana, and Prometheus will be running in the background. The charts of the gathered metrics can be accessed and viewed via the Grafana UI at http://localhost:3000/d/1DYaynomMk/example-service-dashboard.
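For reference, a setup like this can be described in a single docker-compose.yml. The sketch below is illustrative, not the repository's actual file; service names and mounted paths are assumptions:

version: "3"
services:
  app:
    build: ./example-nodejs-app   # the instrumented Node.js app
    ports:
      - "8080:8080"
  prometheus:
    image: prom/prometheus:v2.20.1
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml
    ports:
      - "9090:9090"
  grafana:
    image: grafana/grafana:7.1.5
    volumes:
      - ./datasources.yml:/etc/grafana/provisioning/datasources/datasources.yml
    ports:
      - "3000:3000"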
To generate traffic for the Node.js app, we will use ApacheBench, a tool for sending batches of HTTP requests from the command line.
On macOS, it comes pre-installed. On Debian-based Linux distributions, ApacheBench can be installed with the following command:
$ apt-get install apache2-utils 
For Windows, you can download the binaries from Apache Lounge as a ZIP archive. ApacheBench will be named ab.exe in that archive.
This CLI command will run ApacheBench so that it sends 10,000 requests to the /order endpoint of the Node.js app:
$ ab -m POST -n 10000 -c 100 http://localhost:8080/order 
Depending on your hardware, running this command may take some time.
After running the ab command, you can access the Grafana dashboard via http://localhost:3000/d/1DYaynomMk/example-service-dashboard.
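The dashboard follows the RED method (Rate, Errors, Duration). Assuming the http_request_duration_seconds histogram from the earlier example (the repository's metric names may differ slightly), the three signals can be derived with queries along these lines:

# Rate: requests per second over the last 5 minutes
sum(rate(http_request_duration_seconds_count[5m]))

# Errors: fraction of requests that returned a 5xx status code
sum(rate(http_request_duration_seconds_count{code=~"5.."}[5m]))
  / sum(rate(http_request_duration_seconds_count[5m]))

# Duration: 95th-percentile request latency
histogram_quantile(0.95, sum by (le) (rate(http_request_duration_seconds_bucket[5m])))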

Summary

Prometheus is a powerful open-source tool for self-hosted monitoring. It’s a good option for cases in which you don’t want to build from scratch but also don’t want to invest in a SaaS solution.
With a community-supported client library for Node.js and numerous client libraries for other languages, the monitoring of all your systems can be bundled into one place.
Its integration is straightforward, involving just a few lines of code. Long-running services can be instrumented directly, while short-lived jobs and FaaS-based implementations can hand their metrics off via a push gateway.
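As a sketch of that push-based path: prom-client ships a Pushgateway client that short-lived jobs can use to push their metrics to a Prometheus Pushgateway, which Prometheus then scrapes. This assumes a Pushgateway running at localhost:9091 and a recent prom-client version where pushAdd returns a promise; the job name is made up for the example:

const client = require('prom-client')

// Use a dedicated registry for this batch job's metrics
const register = new client.Registry()

const jobsProcessed = new client.Counter({
  name: 'jobs_processed_total',
  help: 'Number of jobs processed by this batch run',
  registers: [register]
})

// ... do the actual batch work ...
jobsProcessed.inc()

// Push the metrics to the Pushgateway under the job name "example-batch-job"
// (job name and Pushgateway address are assumptions for this sketch)
const gateway = new client.Pushgateway('http://localhost:9091', {}, register)
gateway
  .pushAdd({ jobName: 'example-batch-job' })
  .then(() => console.log('Metrics pushed'))
  .catch((err) => console.error('Pushing metrics failed', err))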
Grafana is also an open-source tool that integrates well with Prometheus. Among the many benefits it offers are flexible configuration, dashboards that allow you to visualize any relevant metric, and alerts to notify of any anomalous behavior.
These two tools combined offer a straightforward way to get insights into your systems. Prometheus offers huge flexibility in terms of metrics gathered and Grafana offers many different graphs to display these metrics. Prometheus and Grafana also integrate so well with each other that it’s surprising they’re not part of one product.
You should now have a good understanding of Prometheus and Grafana and how to make use of them to monitor your Node.js projects in order to gain more insights and confidence in your software deployments.
submitted by matthevva to node [link] [comments]

another take on Getting into Devops as a Beginner

I really enjoyed m4nz's recent post: Getting into DevOps as a beginner is tricky - My 50 cents to help with it and wanted to do my own version of it, in hopes that it might help beginners as well. I agree with most of their advice and recommend folks check it out if you haven't yet, but I wanted to provide more of a simple list of things to learn and tools to use to complement their solid advice.

Background

While I went to college and got a degree, it wasn't in computer science. I simply developed an interest in Linux and Free & Open Source Software as a hobby. I set up a home server and a home theater PC before smart TVs and Roku were really a thing, just because I thought it was cool and interesting and enjoyed the novelty of it.
Fast forward a few years and basically I was just tired of being poor lol. I had heard on the now defunct Linux Action Show podcast about linuxacademy.com and how people had had success with getting Linux jobs despite not having a degree by taking the courses there and acquiring certifications. I took a course, got the basic LPI Linux Essentials Certification, then got lucky by landing literally the first Linux job I applied for at a consulting firm as a junior sysadmin.
Without a CS degree, without any real experience, and with one measly certification, I figured I had to level up my skills as quickly as possible, and this is where I really started to get into DevOps tools and methodologies. I now have 5 years of experience in the IT world, most of it doing DevOps/SRE work.

Certifications

People have varying opinions on the relevance and worth of certifications. If you already have a CS degree or experience, then they're probably not needed unless their structure and challenge would be a good motivation for you to learn more. Without experience or a CS degree, you'll probably need a few to break into the IT world, unless you know someone or have something else to prove your skills, like a GitHub profile with lots of open source contributions, or a non-profit you built a website for, or something like that. Regardless of their efficacy at judging a candidate's ability to actually do DevOps/sysadmin work, they can absolutely help you get hired, in my experience.
Right now, these are the certs I would recommend beginners pursue. You don't necessarily need all of them to get a job (I got started with just the first one on this list), and any real-world experience you can get will be worth more than any number of certs, imo, both in terms of knowledge gained and in improving your prospects of getting hired; but this is a good starting place to help you plan out which certs you want to pursue. Some hiring managers and DevOps professionals don't care at all about certs, while some folks place way too much emphasis on them; it all depends on the company and the person interviewing you. In my experience, they absolutely helped me advance my career. If you feel you don't need them, that's cool too; they're a lot of work, so skip them if you can, of course lol.

Tools and Experimentation

While certs can help you get hired, they won't make you a good DevOps Engineer or Site Reliability Engineer. The only way to get good, just like with anything else, is to practice. There are a lot of sub-areas in the DevOps world to specialize in ... though in my experience, especially at smaller companies, you'll be asked to do a little (or a lot) of all of them.
Though definitely not exhaustive, here's a list of tools you'll want to gain experience with both as points on a resume and as trusty tools in your tool belt you can call on to solve problems. While there is plenty of "resume driven development" in the DevOps world, these tools are solving real problems that people encounter and struggle with all the time, i.e., you're not just learning them because they are cool and flashy, but because not knowing and using them is a giant pain!
There are many, many other DevOps tools I left out that are worthwhile (I didn't even touch the tools in the kubernetes space like helm and spinnaker). Definitely don't stop at this list! A good DevOps engineer is always looking to add useful tools to their tool belt. This industry changes so quickly, it's hard to keep up. That's why it's important to also learn the "why" of each of these tools, so that you can determine which tool would best solve a particular problem. Nearly everything on this list could be swapped for another tool to accomplish the same goals. The ones I listed are simply the most common/popular and so are a good place to start for beginners.

Programming Languages

Any language you learn will be useful and make you a better sysadmin/DevOps Eng/SRE, but these are the 3 I would recommend that beginners target first.

Expanding your knowledge

As m4nz correctly pointed out in their post, while knowledge of and experience with popular DevOps tools is important, nothing beats in-depth knowledge of the underlying systems. The more you can learn about Linux, operating system design, distributed systems, git concepts, language design, and networking (it's always DNS ;)), the better. Yes, all the tools listed above are extremely useful and will help you do your job, but it helps to know why we use those tools in the first place. What problems are they solving? The solutions to many production problems have already been automated away for the most part: Kubernetes will restart a failed service automatically, automated testing catches many common bugs, etc. ... but that means that sometimes the solution to the issue you're troubleshooting will be quite esoteric. Occam's razor still applies, and it's usually the simplest explanation that works; but sometimes the problem really is at the kernel level.
The biggest innovations in the IT world are generally ones of abstraction: config management abstracts away tedious server provisioning, cloud providers abstract away the data center, containers abstract away the OS level, container orchestration abstracts away the node and cluster level, etc. Understanding what is happening beneath each layer of abstraction is crucial. It gives you a "big picture" of how everything fits together and why things are the way they are; and it allows you to place new tools and information into the big picture, so you'll know why they'd be useful or whether or not they'd work for your company and team before you've even looked in-depth at them.
Anyway, I hope that helps. I'll be happy to answer any beginner/getting-started questions that folks have! I don't care to argue about this or that point in my post, but if you have a better suggestion or additional advice then please just add it here in the comments or in your own post! A good DevOps Eng/SRE freely shares their knowledge so that we can all improve.
submitted by jamabake to devops [link] [comments]

Your /r/javascript recap for the week of August 24 - August 30

Monday, August 24 - Sunday, August 30

Top Posts

score comments title & link
441 34 comments ztext.js - a clever new JS library (3.9 kb) that makes any font 3D
438 107 comments TIL, "JavaScript" is a trademark of Oracle Corporation in the United States
335 30 comments Visualize your Data Structures in VS Code
269 16 comments Making WAVs: Understanding a Binary File Format by Parsing and Creating WAV Files from Scratch in JavaScript
232 135 comments Why I Don’t Use GraphQL Anymore
217 12 comments ePaper.js - Node.js library for easily creating an ePaper display on a Raspberry PI using HTML and Javascript
183 67 comments I created a plugin for ESLint that sorts imports in a beautiful way
148 23 comments I built a website where you can guess the total number of npm dependencies and also display them in a tree view
143 6 comments React Internals (Part 2) - Reconciliation algorithm until React 15
141 19 comments Probably more than what you want to know about node shebang (medium, not paywalled)
 

Most Commented Posts

score comments title & link
18 38 comments [AskJS] Is it industry practice NOT to handle network errors?
78 31 comments Midway Serverless - A Node.js framework for Serverless - Interview with Harry Chen
32 29 comments [AskJS] How do you guys expose internals of a module for testing without adding it to the API surface?
39 25 comments Setting up a Micro Frontend architecture with Vue and single-spa
6 24 comments [AskJS] object destructuring vs dot notation. Performance and cohesiveness.
 

Top Ask JS

score comments title & link
11 19 comments [AskJS] To Deno, or Not to Deno?
10 14 comments [AskJS] When are service workers worth it?
2 19 comments [AskJS] Is RPC the future?
 

Top Showoffs

score comment
3 samdawsondev said Wrote an article on [How not to GraphQL](https://www.samdawson.dev/article/how-not-to-graphql)
3 Jaskys said Rebuilt my portfolio recently, would like to get some feedback https://dev.jaska.dev/
3 hp4k1h5 said iexcli is somewhat stable now. would appreciate feedback. https://github.com/HP4k1h5/iexcli
 

Top Comments

score comment
274 anlumo said ECMAScript is the correct term which sadly nobody uses (probably because it’s so clunky).
92 596F75206E65726421 said JavaScript is a terrible name anyways. It implies it has something to do with Java. JS is nothing like Java other than the fact that they both use C style syntax.
80 ghostfacedcoder said GraphQL is an optimization, and like any optimization you trade one thing to get another. GraphQL makes it harder to build on the server: to a server dev they are an inherently worse option. But t...
77 OmnipotentMug said Testing private internals is a code smell. It's only public behavior that matters.
73 himdel said I would go package imports first, then local imports (./), all sorted by the from part.
 
submitted by subredditsummarybot to javascript [link] [comments]

what is this i just downloaded (youtube code?)

So this is kind of a weird story. I was planning to restart my computer (can't remember why). I spend most of my time watching YouTube videos, so I had a lot of tabs open. I was watching the videos and then deleting each tab without opening new ones. When I was down to two tabs (I think one was a pretty long video), I tried to open a YouTube home page tab just to look around while I listened to the video. And this is a short excerpt of what I got:
YouTube
submitted by inhuman7773 to techsupport [link] [comments]


Problem with Unreal Engine's Nativization feature + Visual Studio 2017

I'm using UE 4.25.1, by the way.
I don't know why it won't let me package my game after enabling nativization. I did add the assets to nativize to the array by manually setting up my blueprints for nativization via Class Settings.
It packages and runs smoothly afterwards when nativization is disabled.

I'm getting the following errors in my output log:
Took 184.0273603s to run UnrealBuildTool.exe, ExitCode=6
UATHelper: Packaging (Windows (64-bit)): UnrealBuildTool failed. See log for more details. (C:\Users\Zelijah\AppData\Roaming\Unreal Engine\AutomationTool\Logs\Z+[GAMES]+Epic+Games+UE_4.25\UBT-InMyHead-Win64-Development.txt)
UATHelper: Packaging (Windows (64-bit)): AutomationTool exiting with ExitCode=6 (6)
UATHelper: Packaging (Windows (64-bit)): BUILD FAILED
PackagingResults: Error: Unknown Error


Here's my full log text file:
AndroidPlatformFactory.RegisterBuildPlatforms: UnrealBuildTool.AndroidPlatformSDK has no valid SDK IOSPlatformFactory.RegisterBuildPlatforms: UnrealBuildTool.IOSPlatformSDK using manually installed SDK LinuxPlatformFactory.RegisterBuildPlatforms: UnrealBuildTool.LinuxPlatformSDK has no valid SDK WindowsPlatformFactory.RegisterBuildPlatforms: UnrealBuildTool.WindowsPlatformSDK using manually installed SDK BuildMode.Execute: Command line: "Z:\[GAMES]\Epic Games\UE_4.25\Engine\Binaries\DotNET\UnrealBuildTool.exe" InMyHead Win64 Development -Project="Z:\Zelijah Media Files\In My Head (feat. Hijo)\UE Assets\[Project]\InMyHead\InMyHead.uproject" "Z:\Zelijah Media Files\In My Head (feat. Hijo)\UE Assets\[Project]\InMyHead\InMyHead.uproject" -NoUBTMakefiles -remoteini="Z:\Zelijah Media Files\In My Head (feat. Hijo)\UE Assets\[Project]\InMyHead" -skipdeploy -Manifest="Z:\Zelijah Media Files\In My Head (feat. Hijo)\UE Assets\[Project]\InMyHead\Intermediate\Build\Manifest.xml" -NoHotReload -log="C:\Users\Zelijah\AppData\Roaming\Unreal Engine\AutomationTool\Logs\Z+[GAMES]+Epic+Games+UE_4.25\UBT-InMyHead-Win64-Development.txt" DynamicCompilation.RequiresCompilation: Compiling Z:\Zelijah Media Files\In My Head (feat. Hijo)\UE Assets\[Project]\InMyHead\Intermediate\Build\BuildRules\InMyHeadModuleRules.dll: Assembly does not exist WindowsPlatform.FindVSInstallDirs: Found Visual Studio installation: Z:\Program Files (x86)\Microsoft Visual Studio\2017\Enterprise (Product=Microsoft.VisualStudio.Product.Enterprise, Version=15.9.28307.1146, Sort=0) WindowsPlatform.FindToolChainDirs: Found Visual Studio toolchain: Z:\Program Files (x86)\Microsoft Visual Studio\2017\Enterprise\VC\Tools\MSVC\14.16.27023 (Version=14.16.27040) WindowsPlatform.UpdateCachedWindowsSdks: Found Windows 8.1 SDK at C:\Program Files (x86)\Windows Kits\8.1 WindowsPlatform.EnumerateSdkRootDirs: Found Windows 10 SDK root at C:\Program Files (x86)\Windows Kits\10 (1) WindowsPlatform.EnumerateSdkRootDirs: Found Windows 10 SDK root at C:\Program Files (x86)\Windows Kits\10 (2) WindowsPlatform.UpdateCachedWindowsSdks: Found Universal CRT version 10.0.10240.0 at C:\Program Files (x86)\Windows Kits\10 WindowsPlatform.UpdateCachedWindowsSdks: Found Windows 10 SDK version 10.0.14393.0 at C:\Program Files (x86)\Windows Kits\10 WindowsPlatform.UpdateCachedWindowsSdks: Found Universal CRT version 10.0.14393.0 at C:\Program Files (x86)\Windows Kits\10 DynamicCompilation.RequiresCompilation: Compiling Z:\Zelijah Media Files\In My Head (feat. 
Hijo)\UE Assets\[Project]\InMyHead\Intermediate\Plugins\NativizedAssets\Windows\Game\Intermediate\Build\BuildRules\NativizedAssetsModuleRules.dll: Assembly does not exist UEBuildTarget.AddPlugin: Enabling plugin 'PythonScriptPlugin' (referenced via InMyHead.uproject) UEBuildTarget.AddPlugin: Enabling plugin 'NativizedAssets' (referenced via default plugins) UEBuildTarget.AddPlugin: Enabling plugin 'Paper2D' (referenced via default plugins) UEBuildTarget.AddPlugin: Enabling plugin 'AISupport' (referenced via default plugins) UEBuildTarget.AddPlugin: Enabling plugin 'LightPropagationVolume' (referenced via default plugins) UEBuildTarget.AddPlugin: Enabling plugin 'CameraShakePreviewer' (referenced via default plugins) UEBuildTarget.AddPlugin: Enabling plugin 'ActorLayerUtilities' (referenced via default plugins) UEBuildTarget.AddPlugin: Enabling plugin 'AnimationSharing' (referenced via default plugins) UEBuildTarget.AddPlugin: Enabling plugin 'SignificanceManager' (referenced via default plugins -> AnimationSharing.uplugin) UEBuildTarget.AddPlugin: Enabling plugin 'CLionSourceCodeAccess' (referenced via default plugins) UEBuildTarget.AddPlugin: Enabling plugin 'CodeLiteSourceCodeAccess' (referenced via default plugins) UEBuildTarget.AddPlugin: Enabling plugin 'GitSourceControl' (referenced via default plugins) UEBuildTarget.AddPlugin: Enabling plugin 'KDevelopSourceCodeAccess' (referenced via default plugins) UEBuildTarget.AddPlugin: Enabling plugin 'NullSourceCodeAccess' (referenced via default plugins) UEBuildTarget.AddPlugin: Enabling plugin 'PerforceSourceControl' (referenced via default plugins) UEBuildTarget.AddPlugin: Enabling plugin 'PlasticSourceControl' (referenced via default plugins) UEBuildTarget.AddPlugin: Enabling plugin 'RiderSourceCodeAccess' (referenced via default plugins) UEBuildTarget.AddPlugin: Enabling plugin 'SubversionSourceControl' (referenced via default plugins) UEBuildTarget.AddPlugin: Enabling plugin 'UObjectPlugin' (referenced via default plugins) UEBuildTarget.AddPlugin: Enabling plugin 'VisualStudioCodeSourceCodeAccess' (referenced via default plugins) UEBuildTarget.AddPlugin: Enabling plugin 'VisualStudioSourceCodeAccess' (referenced via default plugins) UEBuildTarget.AddPlugin: Enabling plugin 'XCodeSourceCodeAccess' (referenced via default plugins) UEBuildTarget.AddPlugin: Enabling plugin 'AssetManagerEditor' (referenced via default plugins) UEBuildTarget.AddPlugin: Enabling plugin 'CryptoKeys' (referenced via default plugins) UEBuildTarget.AddPlugin: Enabling plugin 'CurveEditorTools' (referenced via default plugins) UEBuildTarget.AddPlugin: Enabling plugin 'DataValidation' (referenced via default plugins) UEBuildTarget.AddPlugin: Enabling plugin 'FacialAnimation' (referenced via default plugins) UEBuildTarget.AddPlugin: Enabling plugin 'GameplayTagsEditor' (referenced via default plugins) UEBuildTarget.AddPlugin: Enabling plugin 'GeometryMode' (referenced via default plugins) UEBuildTarget.AddPlugin: Enabling plugin 'MacGraphicsSwitching' (referenced via default plugins) UEBuildTarget.AddPlugin: Enabling plugin 'MaterialAnalyzer' (referenced via default plugins) UEBuildTarget.AddPlugin: Enabling plugin 'MobileLauncherProfileWizard' (referenced via default plugins) UEBuildTarget.AddPlugin: Enabling plugin 'PluginBrowser' (referenced via default plugins) UEBuildTarget.AddPlugin: Enabling plugin 'SpeedTreeImporter' (referenced via default plugins) UEBuildTarget.AddPlugin: Enabling plugin 'DatasmithContent' (referenced via default plugins) 
UEBuildTarget.AddPlugin: Enabling plugin 'VariantManagerContent' (referenced via default plugins -> DatasmithContent.uplugin) UEBuildTarget.AddPlugin: Enabling plugin 'AlembicImporter' (referenced via default plugins) UEBuildTarget.AddPlugin: Enabling plugin 'GeometryCache' (referenced via default plugins -> AlembicImporter.uplugin) UEBuildTarget.AddPlugin: Enabling plugin 'AutomationUtils' (referenced via default plugins) UEBuildTarget.AddPlugin: Enabling plugin 'ScreenshotTools' (referenced via default plugins -> AutomationUtils.uplugin) UEBuildTarget.AddPlugin: Enabling plugin 'BackChannel' (referenced via default plugins) UEBuildTarget.AddPlugin: Enabling plugin 'ChaosClothEditor' (referenced via default plugins) UEBuildTarget.AddPlugin: Enabling plugin 'ChaosCloth' (referenced via default plugins -> ChaosClothEditor.uplugin) UEBuildTarget.AddPlugin: Enabling plugin 'ChaosEditor' (referenced via default plugins) UEBuildTarget.AddPlugin: Enabling plugin 'PlanarCut' (referenced via default plugins -> ChaosEditor.uplugin) UEBuildTarget.AddPlugin: Enabling plugin 'GeometryProcessing' (referenced via default plugins -> ChaosEditor.uplugin -> PlanarCut.uplugin) UEBuildTarget.AddPlugin: Enabling plugin 'EditableMesh' (referenced via default plugins -> ChaosEditor.uplugin) UEBuildTarget.AddPlugin: Enabling plugin 'GeometryCollectionPlugin' (referenced via default plugins -> ChaosEditor.uplugin) UEBuildTarget.AddPlugin: Enabling plugin 'ProceduralMeshComponent' (referenced via default plugins -> ChaosEditor.uplugin -> GeometryCollectionPlugin.uplugin) UEBuildTarget.AddPlugin: Enabling plugin 'ChaosSolverPlugin' (referenced via default plugins -> ChaosEditor.uplugin -> GeometryCollectionPlugin.uplugin) UEBuildTarget.AddPlugin: Enabling plugin 'ChaosNiagara' (referenced via default plugins) UEBuildTarget.AddPlugin: Enabling plugin 'Niagara' (referenced via default plugins -> ChaosNiagara.uplugin) UEBuildTarget.AddPlugin: Enabling plugin 'CharacterAI' (referenced via default plugins) UEBuildTarget.AddPlugin: Enabling plugin 'PlatformCrypto' (referenced via default plugins) UEBuildTarget.AddPlugin: Enabling plugin 'ProxyLODPlugin' (referenced via default plugins) UEBuildTarget.AddPlugin: Enabling plugin 'SkeletalReduction' (referenced via default plugins) UEBuildTarget.AddPlugin: Ignoring plugin 'MagicLeapMedia' (referenced via default plugins) due to unsupported target platform. 
UEBuildTarget.AddPlugin: Enabling plugin 'MagicLeapPassableWorld' (referenced via default plugins) UEBuildTarget.AddPlugin: Enabling plugin 'MagicLeap' (referenced via default plugins) UEBuildTarget.AddPlugin: Enabling plugin 'MLSDK' (referenced via default plugins -> MagicLeap.uplugin) UEBuildTarget.AddPlugin: Enabling plugin 'MagicLeapLightEstimation' (referenced via default plugins -> MagicLeap.uplugin) UEBuildTarget.AddPlugin: Enabling plugin 'AndroidMedia' (referenced via default plugins) UEBuildTarget.AddPlugin: Enabling plugin 'AvfMedia' (referenced via default plugins) UEBuildTarget.AddPlugin: Enabling plugin 'ImgMedia' (referenced via default plugins) UEBuildTarget.AddPlugin: Enabling plugin 'MediaCompositing' (referenced via default plugins) UEBuildTarget.AddPlugin: Enabling plugin 'MediaPlayerEditor' (referenced via default plugins) UEBuildTarget.AddPlugin: Enabling plugin 'WmfMedia' (referenced via default plugins) UEBuildTarget.AddPlugin: Enabling plugin 'MeshPainting' (referenced via default plugins) UEBuildTarget.AddPlugin: Enabling plugin 'TcpMessaging' (referenced via default plugins) UEBuildTarget.AddPlugin: Enabling plugin 'UdpMessaging' (referenced via default plugins) UEBuildTarget.AddPlugin: Enabling plugin 'ActorSequence' (referenced via default plugins) UEBuildTarget.AddPlugin: Enabling plugin 'LevelSequenceEditor' (referenced via default plugins) UEBuildTarget.AddPlugin: Enabling plugin 'MatineeToLevelSequence' (referenced via default plugins) UEBuildTarget.AddPlugin: Enabling plugin 'TemplateSequence' (referenced via default plugins -> MatineeToLevelSequence.uplugin) UEBuildTarget.AddPlugin: Enabling plugin 'MovieRenderPipeline' (referenced via default plugins) UEBuildTarget.AddPlugin: Enabling plugin 'EditorScriptingUtilities' (referenced via default plugins -> MovieRenderPipeline.uplugin) UEBuildTarget.AddPlugin: Enabling plugin 'NetcodeUnitTest' (referenced via default plugins) UEBuildTarget.AddPlugin: Enabling plugin 'NUTUnrealEngine4' (referenced via default plugins) UEBuildTarget.AddPlugin: Ignoring plugin 'OnlineSubsystemGooglePlay' (referenced via default plugins) due to unsupported target platform. UEBuildTarget.AddPlugin: Ignoring plugin 'OnlineSubsystemIOS' (referenced via default plugins) due to unsupported target platform. 
UEBuildTarget.AddPlugin: Enabling plugin 'OnlineSubsystemNull' (referenced via default plugins) UEBuildTarget.AddPlugin: Enabling plugin 'OnlineSubsystem' (referenced via default plugins -> OnlineSubsystemNull.uplugin) UEBuildTarget.AddPlugin: Enabling plugin 'OnlineSubsystemUtils' (referenced via default plugins -> OnlineSubsystemNull.uplugin) UEBuildTarget.AddPlugin: Enabling plugin 'LauncherChunkInstaller' (referenced via default plugins) UEBuildTarget.AddPlugin: Enabling plugin 'AndroidDeviceProfileSelector' (referenced via default plugins) UEBuildTarget.AddPlugin: Enabling plugin 'AndroidMoviePlayer' (referenced via default plugins) UEBuildTarget.AddPlugin: Enabling plugin 'AndroidPermission' (referenced via default plugins) UEBuildTarget.AddPlugin: Enabling plugin 'AppleImageUtils' (referenced via default plugins) UEBuildTarget.AddPlugin: Enabling plugin 'AppleMoviePlayer' (referenced via default plugins) UEBuildTarget.AddPlugin: Enabling plugin 'ArchVisCharacter' (referenced via default plugins) UEBuildTarget.AddPlugin: Enabling plugin 'AssetTags' (referenced via default plugins) UEBuildTarget.AddPlugin: Enabling plugin 'AudioCapture' (referenced via default plugins) UEBuildTarget.AddPlugin: Enabling plugin 'CableComponent' (referenced via default plugins) UEBuildTarget.AddPlugin: Enabling plugin 'CustomMeshComponent' (referenced via default plugins) UEBuildTarget.AddPlugin: Enabling plugin 'ExampleDeviceProfileSelector' (referenced via default plugins) UEBuildTarget.AddPlugin: Enabling plugin 'GoogleCloudMessaging' (referenced via default plugins) UEBuildTarget.AddPlugin: Enabling plugin 'GooglePAD' (referenced via default plugins) UEBuildTarget.AddPlugin: Enabling plugin 'IOSDeviceProfileSelector' (referenced via default plugins) UEBuildTarget.AddPlugin: Enabling plugin 'LinuxDeviceProfileSelector' (referenced via default plugins) UEBuildTarget.AddPlugin: Enabling plugin 'LocationServicesBPLibrary' (referenced via default plugins) UEBuildTarget.AddPlugin: Enabling plugin 'MobilePatchingUtils' (referenced via default plugins) UEBuildTarget.AddPlugin: Enabling plugin 'OculusVR' (referenced via default plugins) UEBuildTarget.AddPlugin: Enabling plugin 'PhysXVehicles' (referenced via default plugins) UEBuildTarget.AddPlugin: Ignoring plugin 'PostSplashScreen' (referenced via default plugins) due to unsupported target platform. 
UEBuildTarget.AddPlugin: Enabling plugin 'RuntimePhysXCooking' (referenced via default plugins) UEBuildTarget.AddPlugin: Enabling plugin 'SoundFields' (referenced via default plugins) UEBuildTarget.AddPlugin: Enabling plugin 'SteamVR' (referenced via default plugins) UEBuildTarget.AddPlugin: Enabling plugin 'Synthesis' (referenced via default plugins) UEBuildTarget.AddPlugin: Enabling plugin 'WebMMoviePlayer' (referenced via default plugins) UEBuildTarget.AddPlugin: Enabling plugin 'WebMMedia' (referenced via default plugins -> WebMMoviePlayer.uplugin) UEBuildTarget.AddPlugin: Enabling plugin 'WindowsMoviePlayer' (referenced via default plugins) LuminSDKVersionHelper.HasAnySDK: *** Unable to determine MLSDK location *** VCToolChain..ctor: Compiler: Z:\Program Files (x86)\Microsoft Visual Studio\2017\Enterprise\VC\Tools\MSVC\14.16.27023\bin\HostX64\x64\cl.exe VCToolChain..ctor: Linker: Z:\Program Files (x86)\Microsoft Visual Studio\2017\Enterprise\VC\Tools\MSVC\14.16.27023\bin\HostX64\x64\link.exe VCToolChain..ctor: Library Manager: Z:\Program Files (x86)\Microsoft Visual Studio\2017\Enterprise\VC\Tools\MSVC\14.16.27023\bin\HostX64\x64\lib.exe VCToolChain..ctor: Resource Compiler: C:\Program Files (x86)\Windows Kits\10\bin\x64\rc.exe ExternalExecution.AreGeneratedCodeFilesOutOfDate: UnrealHeaderTool needs to run because no generated code directory was found for module NativizedAssets ExternalExecution.ExecuteHeaderToolIfNecessary: Parsing headers for InMyHead ExternalExecution.ExecuteHeaderToolIfNecessary: Running UnrealHeaderTool "Z:\Zelijah Media Files\In My Head (feat. Hijo)\UE Assets\[Project]\InMyHead\InMyHead.uproject" "Z:\Zelijah Media Files\In My Head (feat. Hijo)\UE Assets\[Project]\InMyHead\Intermediate\Build\Win64\InMyHead\Development\InMyHead.uhtmanifest" -LogCmds="loginit warning, logexit warning, logdatabase error" -Unattended -WarningsAsErrors -abslog="C:\Users\Zelijah\AppData\Roaming\Unreal Engine\AutomationTool\Logs\Z+[GAMES]+Epic+Games+UE_4.25\UHT-InMyHead-Win64-Development.txt" -installed ExternalExecution.ExecuteHeaderToolIfNecessary: Reflection code generated for InMyHead in 7.4428985 seconds UEBuildTarget.GenerateManifest: Writing manifest to Z:\Zelijah Media Files\In My Head (feat. Hijo)\UE Assets\[Project]\InMyHead\Intermediate\Build\Manifest.xml HotReload.IsLiveCodingSessionActive: Checking for live coding mutex: Global\LiveCoding_Z++Zelijah Media Files+In My Head (feat. Hijo)+UE Assets+[Project]+InMyHead+Binaries+Win64+InMyHead.exe ActionGraph.IsActionOutdated: Module.NativizedAssets.4_of_11.cpp: Produced item "Module.NativizedAssets.4_of_11.cpp.obj" doesn't exist. ActionGraph.IsActionOutdated: Module.NativizedAssets.9_of_11.cpp: Produced item "Module.NativizedAssets.9_of_11.cpp.obj" doesn't exist. ActionGraph.IsActionOutdated: Module.NativizedAssets.gen.3_of_11.cpp: Produced item "Module.NativizedAssets.gen.3_of_11.cpp.obj" doesn't exist. ActionGraph.IsActionOutdated: SharedPCH.Engine.cpp: Produced item "SharedPCH.Engine.h.pch" doesn't exist. ActionGraph.IsActionOutdated: Module.NativizedAssets.gen.9_of_11.cpp: Produced item "Module.NativizedAssets.gen.9_of_11.cpp.obj" doesn't exist. ActionGraph.IsActionOutdated: Module.NativizedAssets.gen.4_of_11.cpp: Produced item "Module.NativizedAssets.gen.4_of_11.cpp.obj" doesn't exist. ActionGraph.IsActionOutdated: Module.NativizedAssets.gen.7_of_11.cpp: Produced item "Module.NativizedAssets.gen.7_of_11.cpp.obj" doesn't exist. 
ActionGraph.IsActionOutdated: Module.NativizedAssets.gen.8_of_11.cpp: Produced item "Module.NativizedAssets.gen.8_of_11.cpp.obj" doesn't exist. ActionGraph.IsActionOutdated: Module.NativizedAssets.gen.6_of_11.cpp: Produced item "Module.NativizedAssets.gen.6_of_11.cpp.obj" doesn't exist. ActionGraph.IsActionOutdated: Module.NativizedAssets.8_of_11.cpp: Produced item "Module.NativizedAssets.8_of_11.cpp.obj" doesn't exist. ActionGraph.IsActionOutdated: Module.NativizedAssets.7_of_11.cpp: Produced item "Module.NativizedAssets.7_of_11.cpp.obj" doesn't exist. ActionGraph.IsActionOutdated: Module.NativizedAssets.1_of_11.cpp: Produced item "Module.NativizedAssets.1_of_11.cpp.obj" doesn't exist. ActionGraph.IsActionOutdated: Module.NativizedAssets.5_of_11.cpp: Produced item "Module.NativizedAssets.5_of_11.cpp.obj" doesn't exist. ActionGraph.IsActionOutdated: Module.NativizedAssets.2_of_11.cpp: Produced item "Module.NativizedAssets.2_of_11.cpp.obj" doesn't exist. ActionGraph.IsActionOutdated: Module.NativizedAssets.gen.5_of_11.cpp: Produced item "Module.NativizedAssets.gen.5_of_11.cpp.obj" doesn't exist. ActionGraph.IsActionOutdated: InMyHead.target: Produced item "InMyHead.target" doesn't exist. ActionGraph.IsActionOutdated: Module.NativizedAssets.gen.1_of_11.cpp: Produced item "Module.NativizedAssets.gen.1_of_11.cpp.obj" doesn't exist. ActionGraph.IsActionOutdated: Module.NativizedAssets.gen.2_of_11.cpp: Produced item "Module.NativizedAssets.gen.2_of_11.cpp.obj" doesn't exist. ActionGraph.IsActionOutdated: Module.NativizedAssets.10_of_11.cpp: Produced item "Module.NativizedAssets.10_of_11.cpp.obj" doesn't exist. ActionGraph.IsActionOutdated: InMyHead.exe: Produced item "InMyHead.exe" doesn't exist. ActionGraph.IsActionOutdated: Module.NativizedAssets.gen.10_of_11.cpp: Produced item "Module.NativizedAssets.gen.10_of_11.cpp.obj" doesn't exist. ActionGraph.IsActionOutdated: Default.rc2: Produced item "Default.rc2.res" doesn't exist. ActionGraph.IsActionOutdated: Module.NativizedAssets.6_of_11.cpp: Produced item "Module.NativizedAssets.6_of_11.cpp.obj" doesn't exist. ActionGraph.IsActionOutdated: Module.NativizedAssets.gen.11_of_11.cpp: Produced item "Module.NativizedAssets.gen.11_of_11.cpp.obj" doesn't exist. ActionGraph.IsActionOutdated: Module.NativizedAssets.3_of_11.cpp: Produced item "Module.NativizedAssets.3_of_11.cpp.obj" doesn't exist. ActionGraph.IsActionOutdated: InMyHead.cpp: Produced item "InMyHead.cpp.obj" doesn't exist. ActionGraph.IsActionOutdated: Module.NativizedAssets.11_of_11.cpp: Produced item "Module.NativizedAssets.11_of_11.cpp.obj" doesn't exist. ActionGraph.IsActionOutdated: SharedPCH.Core.cpp: Produced item "SharedPCH.Core.h.pch" doesn't exist. ActionGraph.IsActionOutdated: Module.NativizedAssets.4_of_11.cpp: Produced item "Module.NativizedAssets.4_of_11.cpp.txt" doesn't exist. ActionGraph.IsActionOutdated: Module.NativizedAssets.9_of_11.cpp: Produced item "Module.NativizedAssets.9_of_11.cpp.txt" doesn't exist. ActionGraph.IsActionOutdated: Module.NativizedAssets.gen.3_of_11.cpp: Produced item "Module.NativizedAssets.gen.3_of_11.cpp.txt" doesn't exist. ActionGraph.IsActionOutdated: Module.NativizedAssets.gen.8_of_11.cpp: Produced item "Module.NativizedAssets.gen.8_of_11.cpp.txt" doesn't exist. ActionGraph.IsActionOutdated: Module.NativizedAssets.gen.1_of_11.cpp: Produced item "Module.NativizedAssets.gen.1_of_11.cpp.txt" doesn't exist. ActionGraph.IsActionOutdated: Module.NativizedAssets.gen.2_of_11.cpp: Produced item "Module.NativizedAssets.gen.2_of_11.cpp.txt" doesn't exist. 
ActionGraph.IsActionOutdated: Module.NativizedAssets.gen.9_of_11.cpp: Produced item "Module.NativizedAssets.gen.9_of_11.cpp.txt" doesn't exist.
ActionGraph.IsActionOutdated: Module.NativizedAssets.gen.7_of_11.cpp: Produced item "Module.NativizedAssets.gen.7_of_11.cpp.txt" doesn't exist.
ActionGraph.IsActionOutdated: Module.NativizedAssets.10_of_11.cpp: Produced item "Module.NativizedAssets.10_of_11.cpp.txt" doesn't exist.
ActionGraph.IsActionOutdated: Module.NativizedAssets.11_of_11.cpp: Produced item "Module.NativizedAssets.11_of_11.cpp.txt" doesn't exist.
ActionGraph.IsActionOutdated: Module.NativizedAssets.gen.10_of_11.cpp: Produced item "Module.NativizedAssets.gen.10_of_11.cpp.txt" doesn't exist.
ActionGraph.IsActionOutdated: SharedPCH.Engine.cpp: Produced item "SharedPCH.Engine.h.obj" doesn't exist.
ActionGraph.IsActionOutdated: Module.NativizedAssets.2_of_11.cpp: Produced item "Module.NativizedAssets.2_of_11.cpp.txt" doesn't exist.
ActionGraph.IsActionOutdated: SharedPCH.Core.cpp: Produced item "SharedPCH.Core.h.obj" doesn't exist.
ActionGraph.IsActionOutdated: Module.NativizedAssets.gen.6_of_11.cpp: Produced item "Module.NativizedAssets.gen.6_of_11.cpp.txt" doesn't exist.
ActionGraph.IsActionOutdated: Module.NativizedAssets.8_of_11.cpp: Produced item "Module.NativizedAssets.8_of_11.cpp.txt" doesn't exist.
ActionGraph.IsActionOutdated: Module.NativizedAssets.gen.5_of_11.cpp: Produced item "Module.NativizedAssets.gen.5_of_11.cpp.txt" doesn't exist.
ActionGraph.IsActionOutdated: SharedPCH.Engine.cpp: Produced item "SharedPCH.Engine.h.txt" doesn't exist.
ActionGraph.IsActionOutdated: InMyHead.exe: Produced item "InMyHead.pdb" was produced by outdated command-line.
ActionGraph.IsActionOutdated: New command-line: Z:\[GAMES]\Epic Games\UE_4.25\Engine\Build\Windows\link-filter\link-filter.exe -- "Z:\Program Files (x86)\Microsoft Visual Studio\2017\Enterprise\VC\Tools\MSVC\14.16.27023\bin\HostX64\x64\link.exe" @"Z:\Zelijah Media Files\In My Head (feat. Hijo)\UE Assets\[Project]\InMyHead\Intermediate\Build\Win64\InMyHead\Development\InMyHead.exe.response"
ActionGraph.IsActionOutdated: SharedPCH.Core.cpp: Produced item "SharedPCH.Core.h.txt" doesn't exist.
ActionGraph.IsActionOutdated: InMyHead.cpp: Produced item "InMyHead.cpp.txt" doesn't exist.
ActionGraph.IsActionOutdated: Module.NativizedAssets.6_of_11.cpp: Produced item "Module.NativizedAssets.6_of_11.cpp.txt" doesn't exist.
ActionGraph.IsActionOutdated: Module.NativizedAssets.gen.11_of_11.cpp: Produced item "Module.NativizedAssets.gen.11_of_11.cpp.txt" doesn't exist.
ActionGraph.IsActionOutdated: Module.NativizedAssets.gen.4_of_11.cpp: Produced item "Module.NativizedAssets.gen.4_of_11.cpp.txt" doesn't exist.
ActionGraph.IsActionOutdated: Module.NativizedAssets.7_of_11.cpp: Produced item "Module.NativizedAssets.7_of_11.cpp.txt" doesn't exist.
ActionGraph.IsActionOutdated: Module.NativizedAssets.3_of_11.cpp: Produced item "Module.NativizedAssets.3_of_11.cpp.txt" doesn't exist.
ActionGraph.IsActionOutdated: Module.NativizedAssets.5_of_11.cpp: Produced item "Module.NativizedAssets.5_of_11.cpp.txt" doesn't exist.
ActionGraph.IsActionOutdated: Module.NativizedAssets.1_of_11.cpp: Produced item "Module.NativizedAssets.1_of_11.cpp.txt" doesn't exist.
ActionGraph.DeleteOutdatedProducedItems: Deleting outdated item: Z:\Zelijah Media Files\In My Head (feat. Hijo)\UE Assets\[Project]\InMyHead\Binaries\Win64\InMyHead.pdb
BuildMode.Build: Building InMyHead...
BuildMode.OutputToolchainInfo: Using Visual Studio 2017 14.16.27040 toolchain (Z:\Program Files (x86)\Microsoft Visual Studio\2017\Enterprise\VC\Tools\MSVC\14.16.27023) and Windows 10.0.14393.0 SDK (C:\Program Files (x86)\Windows Kits\10).
BuildMode.OutputToolchainInfo: [Upgrade]
BuildMode.OutputToolchainInfo: [Upgrade] Using backward-compatible build settings. The latest version of UE4 sets the following values by default, which may require code changes:
BuildMode.OutputToolchainInfo: [Upgrade] bLegacyPublicIncludePaths = false => Omits subfolders from public include paths to reduce compiler command line length. (Previously: true).
BuildMode.OutputToolchainInfo: [Upgrade] ShadowVariableWarningLevel = WarningLevel.Error => Treats shadowed variable warnings as errors. (Previously: WarningLevel.Warning).
BuildMode.OutputToolchainInfo: [Upgrade] PCHUsage = PCHUsageMode.UseExplicitOrSharedPCHs => Set in build.cs files to enables IWYU-style PCH model. See https://docs.unrealengine.com/en-US/Programming/BuildTools/UnrealBuildTool/IWYU/index.html. (Previously: PCHUsageMode.UseSharedPCHs).
BuildMode.OutputToolchainInfo: [Upgrade] Suppress this message by setting 'DefaultBuildSettings = BuildSettingsVersion.V2;' in InMyHead.Target.cs, and explicitly overriding settings that differ from the new defaults.
BuildMode.OutputToolchainInfo: [Upgrade]
ParallelExecutor.ExecuteActions: Building 28 actions with 16 processes...
ParallelExecutor.ExecuteActions: [1/28] Default.rc2
ParallelExecutor.ExecuteActions: [2/28] SharedPCH.Core.cpp
ParallelExecutor.ExecuteActions: [3/28] InMyHead.cpp
ParallelExecutor.ExecuteActions: [4/28] SharedPCH.Engine.cpp
ParallelExecutor.ExecuteActions: [5/28] Module.NativizedAssets.gen.5_of_11.cpp
ParallelExecutor.ExecuteActions: [6/28] Module.NativizedAssets.gen.4_of_11.cpp
ParallelExecutor.ExecuteActions: [7/28] Module.NativizedAssets.gen.2_of_11.cpp
ParallelExecutor.ExecuteActions: [8/28] Module.NativizedAssets.gen.3_of_11.cpp
ParallelExecutor.ExecuteActions: [9/28] Module.NativizedAssets.gen.6_of_11.cpp
ParallelExecutor.ExecuteActions: [10/28] Module.NativizedAssets.gen.7_of_11.cpp
ParallelExecutor.ExecuteActions: [11/28] Module.NativizedAssets.gen.8_of_11.cpp
ParallelExecutor.ExecuteActions: [12/28] Module.NativizedAssets.gen.9_of_11.cpp
ParallelExecutor.ExecuteActions: [13/28] Module.NativizedAssets.gen.1_of_11.cpp
ParallelExecutor.ExecuteActions: [14/28] Module.NativizedAssets.gen.10_of_11.cpp
ParallelExecutor.ExecuteActions: [15/28] Module.NativizedAssets.gen.11_of_11.cpp
ParallelExecutor.ExecuteActions: [16/28] Module.NativizedAssets.10_of_11.cpp
ParallelExecutor.ExecuteActions: Z:/Zelijah Media Files/In My Head (feat. Hijo)/UE Assets/[Project]/InMyHead/Intermediate/Plugins/NativizedAssets/Windows/Game/Source/NativizedAssets/Private/W_PlayerHUD__pf655287937.cpp(571): error C2065: 'UW_Minimap_C__pf655287937': undeclared identifier
ParallelExecutor.ExecuteActions: Z:/Zelijah Media Files/In My Head (feat. Hijo)/UE Assets/[Project]/InMyHead/Intermediate/Plugins/NativizedAssets/Windows/Game/Source/NativizedAssets/Private/W_PlayerHUD__pf655287937.cpp(571): error C2672: 'NewObject': no matching overloaded function found
ParallelExecutor.ExecuteActions: Z:/Zelijah Media Files/In My Head (feat. Hijo)/UE Assets/[Project]/InMyHead/Intermediate/Plugins/NativizedAssets/Windows/Game/Source/NativizedAssets/Private/W_PlayerHUD__pf655287937.cpp(571): error C2974: 'NewObject': invalid template argument for 'T', type expected
ParallelExecutor.ExecuteActions: Z:\[GAMES]\Epic Games\UE_4.25\Engine\Source\Runtime\CoreUObject\Public\UObject/UObjectGlobals.h(1225): note: see declaration of 'NewObject'
ParallelExecutor.ExecuteActions: Z:/Zelijah Media Files/In My Head (feat. Hijo)/UE Assets/[Project]/InMyHead/Intermediate/Plugins/NativizedAssets/Windows/Game/Source/NativizedAssets/Private/W_PlayerHUD__pf655287937.cpp(572): error C3536: '__Local__8': cannot be used before it is initialized
ParallelExecutor.ExecuteActions: Z:/Zelijah Media Files/In My Head (feat. Hijo)/UE Assets/[Project]/InMyHead/Intermediate/Plugins/NativizedAssets/Windows/Game/Source/NativizedAssets/Private/W_PlayerHUD__pf655287937.cpp(572): error C2440: '': cannot convert from 'int' to 'FUnconvertedWrapper__UW_Minimap_C__pf655287937'
ParallelExecutor.ExecuteActions: Z:/Zelijah Media Files/In My Head (feat. Hijo)/UE Assets/[Project]/InMyHead/Intermediate/Plugins/NativizedAssets/Windows/Game/Source/NativizedAssets/Private/W_PlayerHUD__pf655287937.cpp(572): note: No constructor could take the source type, or constructor overload resolution was ambiguous
ParallelExecutor.ExecuteActions: Z:/Zelijah Media Files/In My Head (feat. Hijo)/UE Assets/[Project]/InMyHead/Intermediate/Plugins/NativizedAssets/Windows/Game/Source/NativizedAssets/Private/W_PlayerHUD__pf655287937.cpp(575): error C2440: '=': cannot convert from 'int' to 'UWidget *'
ParallelExecutor.ExecuteActions: Z:/Zelijah Media Files/In My Head (feat. Hijo)/UE Assets/[Project]/InMyHead/Intermediate/Plugins/NativizedAssets/Windows/Game/Source/NativizedAssets/Private/W_PlayerHUD__pf655287937.cpp(575): note: Conversion from integral type to pointer type requires reinterpret_cast, C-style cast or function-style cast
ParallelExecutor.ExecuteActions: [17/28] Module.NativizedAssets.5_of_11.cpp
ParallelExecutor.ExecuteActions: [18/28] Module.NativizedAssets.2_of_11.cpp
ParallelExecutor.ExecuteActions: [19/28] Module.NativizedAssets.8_of_11.cpp
ParallelExecutor.ExecuteActions: [20/28] Module.NativizedAssets.7_of_11.cpp
ParallelExecutor.ExecuteActions: [21/28] Module.NativizedAssets.6_of_11.cpp
ParallelExecutor.ExecuteActions: [22/28] Module.NativizedAssets.3_of_11.cpp
ParallelExecutor.ExecuteActions: [23/28] Module.NativizedAssets.11_of_11.cpp
ParallelExecutor.ExecuteActions: [24/28] Module.NativizedAssets.1_of_11.cpp
ParallelExecutor.ExecuteActions: [25/28] Module.NativizedAssets.4_of_11.cpp
ParallelExecutor.ExecuteActions: [26/28] Module.NativizedAssets.9_of_11.cpp
UnrealBuildTool.Main: CompilationResultException: Error: OtherCompilationError
UnrealBuildTool.Main: at UnrealBuildTool.ActionGraph.ExecuteActions(BuildConfiguration BuildConfiguration, List`1 ActionsToExecute) in D:\Build\++UE4+Licensee\Sync\Engine\Saved\CsTools\Engine\Source\Programs\UnrealBuildTool\System\ActionGraph.cs:line 242
UnrealBuildTool.Main: at UnrealBuildTool.BuildMode.Build(List`1 TargetDescriptors, BuildConfiguration BuildConfiguration, ISourceFileWorkingSet WorkingSet, BuildOptions Options, FileReference WriteOutdatedActionsFile) in D:\Build\++UE4+Licensee\Sync\Engine\Saved\CsTools\Engine\Source\Programs\UnrealBuildTool\Modes\BuildMode.cs:line 372
UnrealBuildTool.Main: at UnrealBuildTool.BuildMode.Execute(CommandLineArguments Arguments) in D:\Build\++UE4+Licensee\Sync\Engine\Saved\CsTools\Engine\Source\Programs\UnrealBuildTool\Modes\BuildMode.cs:line 219
UnrealBuildTool.Main: at UnrealBuildTool.UnrealBuildTool.Main(String[] ArgumentsArray) in D:\Build\++UE4+Licensee\Sync\Engine\Saved\CsTools\Engine\Source\Programs\UnrealBuildTool\UnrealBuildTool.cs:line 550
Timeline.Print: Timeline:
Timeline.Print:
Timeline.Print: [ 0.000]
Timeline.Print: [ 0.000](+0.053)
Timeline.Print: [ 0.053](+0.002) FileMetadataPrefetch.QueueEngineDirectory()
Timeline.Print: [ 0.055](+0.229) XmlConfig.ReadConfigFiles()
Timeline.Print: [ 0.285](+0.000) SingleInstanceMutex.Acquire()
Timeline.Print: [ 0.285](+0.125) UEBuildPlatform.RegisterPlatforms()
Timeline.Print: 0.286 [ 0.000](+0.090) Initializing InstalledPlatformInfo
Timeline.Print: 0.377 [ 0.091](+0.000) Querying types
Timeline.Print: 0.378 [ 0.093](+0.001) MacPlatformFactory
Timeline.Print: 0.380 [ 0.094](+0.000) TVOSPlatformFactory
Timeline.Print: 0.380 [ 0.095](+0.022) AndroidPlatformFactory
Timeline.Print: 0.403 [ 0.117](+0.000) HoloLensPlatformFactory
Timeline.Print: 0.403 [ 0.117](+0.002) IOSPlatformFactory
Timeline.Print: 0.405 [ 0.120](+0.004) LinuxPlatformFactory
Timeline.Print: 0.410 [ 0.124](+0.000) LuminPlatformFactory
Timeline.Print: 0.410 [ 0.124](+0.000) WindowsPlatformFactory
Timeline.Print: [ 0.418](+0.015) TargetDescriptor.ParseCommandLine()
Timeline.Print: [ 0.448](+4.949) UEBuildTarget.Create()
Timeline.Print: 0.453 [ 0.004](+3.175) RulesCompiler.CreateTargetRulesAssembly()
Timeline.Print: 0.453 0.004 [ 0.000](+2.372)
Timeline.Print: 2.825 2.377 [ 2.372](+0.032) Finding engine modules
Timeline.Print: 2.858 2.409 [ 2.405](+0.005) Finding plugin modules
Timeline.Print: 2.863 2.415 [ 2.410](+0.115)
Timeline.Print: 2.978 2.530 [ 2.525](+0.004) Finding program modules
Timeline.Print: 2.983 2.534 [ 2.530](+0.002) Finding program targets
Timeline.Print: 2.985 2.536 [ 2.532](+0.040)
Timeline.Print: 3.026 2.577 [ 2.572](+0.597) Compiling rules assembly (InMyHeadModuleRules.dll)
Timeline.Print: 3.628 [ 3.180](+0.360) RulesAssembly.CreateTargetRules()
Timeline.Print: 3.989 [ 3.540](+0.079)
Timeline.Print: 4.068 [ 3.620](+0.227) Compiling rules assembly (NativizedAssetsModuleRules.dll)
Timeline.Print: 4.296 [ 3.848](+0.015) UEBuildTarget constructor
Timeline.Print: 4.312 [ 3.864](+1.085) UEBuildTarget.PreBuildSetup()
Timeline.Print: [ 5.401](+32.503) UEBuildTarget.Build()
Timeline.Print: 5.401 [ 0.000](+0.109)
Timeline.Print: 5.511 [ 0.109](+22.197) ExternalExecution.SetupUObjectModules()
Timeline.Print: 27.708 [22.307](+0.195)
Timeline.Print: 27.903 [22.502](+7.442) Executing UnrealHeaderTool
Timeline.Print: 35.347 [29.946](+0.002) ExternalExecution.ResetCachedHeaderInfo()
Timeline.Print: 35.349 [29.948](+0.002) ExternalExecution.UpdateDirectoryTimestamps()
Timeline.Print: 35.352 [29.950](+0.035)
Timeline.Print: 35.387 [29.986](+2.375) UEBuildBinary.Build()
Timeline.Print: 37.762 [32.361](+0.141)
Timeline.Print: [37.905](+0.002) ActionGraph.CheckPathLengths
Timeline.Print: [37.908](+0.020)
Timeline.Print: [37.928](+0.001) Reading dependency cache
Timeline.Print: [37.930](+0.001) Reading action history
Timeline.Print: [37.932](+0.022) ActionGraph.GetActionsToExecute()
Timeline.Print: 37.934 [ 0.001](+0.001) Prefetching include dependencies
Timeline.Print: 37.936 [ 0.003](+0.017) Cache outdated actions
Timeline.Print: [37.955](+0.049)
Timeline.Print: [38.004](+145.515) ActionGraph.ExecuteActions()
Timeline.Print: [183.519](+0.203)
Timeline.Print: [183.722](+0.000) FileMetadataPrefetch.Stop()
Timeline.Print: [183.725]

I also tried doing it with a blank project in Unreal and it gives me this message:

Missing UE4Game binary.
You may have to build the UE4 project with your IDE. Alternatively, build using UnrealBuildTool with the commandline:
UE4Game

I decided to post it here since nobody seems able to figure out what my problem is or where it's coming from, whether it's Visual Studio or Unreal itself... please help me... :(
submitted by jedlsf to gamedev

Python Data Structures #5: Binary Search Tree (BST) - YouTube
Forms in html (Create form in html)
C++ Qt 85 - Binary IO basic object serialization - YouTube
Login system using PHP with MYSQL database - YouTube
Php : How To Search And Filter Data In Html Table Using ...
Scraping web page data of options in select object with ...
HTML5 + Javascript Date Time Form Input Types Tutorial ...
Javascript Form Select Change Options Tutorial Dynamic ...
HTML Tutorial 77 - HTML iframe tag | HTML object tag ...
How to connect HTML Register Form to MySQL Database with ...

The DOM Datalist object represents the HTML <datalist> element. The datalist element is accessed by getElementById(). Properties: it has an 'options' property, which returns the collection of all option values in the datalist. Syntax: document.getElementById("gfg"), where "gfg" is the ID assigned to the <datalist> tag.

Attributes of the HTML <object> element (recovered from a flattened reference table):
type (media_type): specifies the media type of the data specified in the data attribute.
typemustmatch (true/false): specifies whether the type attribute and the actual content of the resource must match for the resource to be displayed.
usemap (#mapname): specifies the name of a client-side image map to be used with the object.
width (pixels): specifies the width of the object.

Packing and unpacking requires a format string that defines how the binary data is structured. It needs to know which bytes represent values: whether the entire set of bytes represents characters, or whether it is a sequence of 4-byte integers. The data can be structured in any number of ways, and the format strings can be simple or complex. In this example I am packing a single four-byte integer ... (see the sketch just after these excerpts).

HTML (DOM) sourced data: the foundation for DataTables is progressive enhancement, so it is very adept at reading table information directly from the DOM. This example shows how easy it is to add searching, ordering, and paging to your HTML table simply by running DataTables on it.

Angular2 response options with binary data: the response is a JavaScript object created by parsing the contents of the received data. response.json() parses the response as a JSON object; response.formData() returns the response as a FormData object (form/multipart encoding); response.blob() ... Uploading multiple files in AngularJS involves collecting the binary data of each file. The Angular URLSearchParams class is used to create URL parameters. To use an Angular Material select, use <mat-select> with a formControl for selecting a value from a set of options.

DataTables will automatically detect four different attributes on the HTML elements: data-sort or data-order, for ordering data; data-filter or data-search, for search data. This example shows the use of the data-sort and data-filter attributes. In this case the first column has been formatted so the first name is abbreviated, but the full name ...

class BinaryView implements a view on binary data and presents a queryable interface over a binary file. One key job of BinaryView is file format parsing, which allows Binary Ninja to read, write, insert, and remove portions of the file given a virtual address. For the purposes of this documentation we define a virtual address as the memory address at which the various pieces of the physical file will ...

Sending binary data: the send method of XMLHttpRequest has been extended to enable easy transmission of binary data by accepting an ArrayBuffer, Blob, or File object. The following example creates a text file on-the-fly and uses the POST method to send the "file" to the server. This example uses plain text, but you can imagine the data ...
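To make the format-string idea above concrete, here is a minimal sketch using Python's struct module; the '<i' format and the value 1337 are illustrative choices of mine, not taken from the original article:

    import struct

    # '<i' describes the layout: little-endian ('<'), one 4-byte signed integer ('i').
    packed = struct.pack('<i', 1337)
    print(packed)        # b'9\x05\x00\x00' -- four raw bytes
    print(len(packed))   # 4

    # unpack() reverses the operation; it always returns a tuple,
    # one element per value described by the format string.
    (value,) = struct.unpack('<i', packed)
    print(value)         # 1337

Swapping the format string changes how the same bytes are interpreted; struct.unpack('<4b', packed), for example, would read them back as four 1-byte integers instead.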


Python Data Structures #5: Binary Search Tree (BST) - YouTube

Tutorial Script: http://www.developphp.com/video/HTML/Date-Time-Form-Input-Type-Tutorial Learn to program HTML5 Date and Time Form input attributes and tie t...
How to scrape web page data of options in a drop down list in select object with VBA. We learn interesting things like determining the number of values in a ...
HTML Tutorial 77 - HTML iframe tag | HTML object tag | HTML embed tag. HTML embed tag, HTML object tag and HTML iframe tag: When to use embed, object and if...
Lesson Code: http://www.developphp.com/video/JavaScript/Form-Select-Change-Dynamic-List-Option-Elements-Tutorial In this Javascript video lesson you will lea...
UPDATE: User registration with email verification on localhost: https://goo.gl/nRADcM Get professionals to build your PHP projects for you: https://www.a3log...
How To Find Data In MySQL Database And Display It In Html Table Using Php Source Code: http://1bestcsharp.blogspot.com/2015/10/php-html-table-search-filter-d...
QML Beginners: https://www.udemy.com/course/qml-for-beginners/?referralCode=3B69B9927B587BBF40F1 Qt Core Beginners: https://www.udemy.com/course/qt-core-for-...
Code below... In this video we'll begin by discussing the basics of the Binary Search Tree data structure, and towards the end, we'll move over to a coding e...
Form in html (Create form in html). Java Project Tutorial - Make Login and Register Form Step by Step Using NetBeans And MySQL Database - Duration: 3:43:32. 1BestCsharp blog 11,425,702 views
If you don't know, How to store data from HTML Register Form to your MySQL Database using PHP then watch this 6 minutes video. Requirement: You should know h...
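Picking up the "Code below..." pointer in the BST description above, here is a minimal, self-contained Python sketch of binary search tree insertion and lookup; the names Node, insert, and search are my own, not taken from the video:

    class Node:
        def __init__(self, key):
            self.key = key
            self.left = None
            self.right = None

    def insert(root, key):
        # Standard BST rule: smaller keys go left, larger keys go right.
        if root is None:
            return Node(key)
        if key < root.key:
            root.left = insert(root.left, key)
        elif key > root.key:
            root.right = insert(root.right, key)
        return root

    def search(root, key):
        # Each comparison discards one subtree, so a balanced
        # tree is searched in O(log n) steps.
        while root is not None and root.key != key:
            root = root.left if key < root.key else root.right
        return root is not None

    root = None
    for k in (8, 3, 10, 1, 6):
        root = insert(root, k)
    print(search(root, 6))   # True
    print(search(root, 7))   # False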
