I created a project on Google Cloud a long time ago and currently I am experiencing some problems: the only response I seem to get is an internal server error. I tried to connect to the compute instance via SSH, but it didn't help much, because:
>As far as I remember, I used to be able to see all the code on the compute instance. It no longer exists; the home folder only contains some hidden files. I don't know where to look for the actual project files.
>The only error I managed to get from the log file is: Error syncing pod 9c8e56bc-4298-11e6-ab50, skipping: failed to "StartContainer" for "postgres" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=postgres pod=postgres_default(9c8e56bc-4298-11e6-ab50)". This makes me think Postgres has a problem. It has its own persistent disk, but there seems to be no easy way to find out how much of that disk is used.
>Even though I am the administrator of the project and should receive a detailed email (with a stack trace) every time an error occurs, I have not received anything at all.
This behavior started today, out of nowhere; I haven't touched this project in nearly two years, so I am completely lost.
Thank you.
How can I check the remaining size of a persistent disk on Google Cloud?
For this part, I finally found a way to do it today. I will describe it with screenshots here, so it's easy for anyone.
>First, go to the Google Cloud console, Disks page: https://console.cloud.google.com/compute/disks
>Identify the persistent disk you are interested in. In my example, it is called pg-data-disk. Click the corresponding VM instance; it is listed in the "In use by" column, as shown below:
>This will open an SSH connection to the VM instance the persistent disk is attached to. In the SSH window, run the following command: sudo lsblk. The result should look like the image below:
>From that output you will find the disk ID (in my case it is sdb), and you can now run: sudo df -h
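The steps above boil down to two commands once you are connected to the VM. A minimal sketch (prefix with sudo on the VM if needed; which device letter the disk gets depends on your setup):

```shell
# On the VM the persistent disk is attached to:
lsblk      # lists block devices; an attached data disk typically shows up as sdb
df -h      # size, used, and available space for every mounted filesystem
df -h /    # or narrow the report to a single mount point
```

Matching the device name from `lsblk` against the rows of `df -h` tells you how full the persistent disk is.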
As for the rest of the question: what I was actually running was a Docker container orchestrated by Kubernetes. I had completely forgotten about it.
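Since the container turned out to be managed by Kubernetes, the CrashLoopBackOff from the log can also be inspected with kubectl from a machine with cluster access. A sketch, assuming the pod name from the error message (yours may differ):

```shell
kubectl get pods --all-namespaces   # find the pod stuck in CrashLoopBackOff
kubectl describe pod postgres       # the Events section explains the back-off
kubectl logs postgres --previous    # logs from the last failed container run
```

The `--previous` flag is useful here because the current container keeps restarting; it shows the output of the run that actually crashed.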
I will upgrade my RAM and get it running again.
Thank you.