Strange dead Picroft

Picroft seems to be dead. What caused it?

------------ output from booting --------
sleeping
remote: Enumerating objects: 150, done.
remote: Counting objects: 100% (150/150), done.
remote: Total 204 (delta 150), reused 150 (delta 150), pack-reused 54
Receiving objects: 100% (204/204), 49.83 KiB | 0 bytes/s, done.
error: inflate: data stream error (incorrect header check)
fatal: loose object d8460345cbf60cb55b36d792e34c0296e9596f94 (stored in .git/objects/d8/460345cbf60cb55b36d792e34c0296e9596f94) is corrupt
fatal: index-pack failed
Initializing…
Starting background service bus
CAUTION: The Mycroft bus is an open websocket with no built-in security
measures. You are responsible for protecting the local port
8181 with a firewall as appropriate.
Starting background service skills
Starting cli


Output from the CLI:

Establishing Mycroft Messagebus connection…
Connected to Messagebus!
07:26:07.742 | INFO | 834 | mycroft.client.speech.listener:create_wakeup_recognizer:322 | creating stand up word engine
07:26:07.746 | INFO | 834 | mycroft.client.speech.hotword_factory:load_module:311 | Loading “wake up” wake word via pocketsphinx
07:26:09.079 | INFO | 834 | mycroft.messagebus.client.client:on_open:67 | Connected
07:26:10.173 | INFO | 777 | msm.mycroft_skills_manager | building SkillEntry objects for all skills
07:26:12.958 | INFO | 777 | main:_initialize_skill_manager:251 | MSM is uninitialized and requires network connection to fetch skill information
Will retry after internet connection is established.
07:26:13.162 | INFO | 777 | mycroft.skills.msm_wrapper:create_msm:90 | Acquiring lock to instantiate MSM
07:26:13.570 | INFO | 777 | msm.mycroft_skills_manager | building SkillEntry objects for all skills
07:26:14.842 | INFO | 777 | main:_initialize_skill_manager:251 | MSM is uninitialized and requires network connection to fetch skill information
Will retry after internet connection is established.
07:26:14.848 | INFO | 777 | main:_get_pairing_status:92 | Device is paired
07:26:14.851 | INFO | 777 | main:_update_system_clock:104 | Updating the system clock via NTP…
07:26:14.864 | INFO | 777 | mycroft.enclosure.display_manager:_write_data:70 | Display Manager is creating /ramdisk/mycroft/ipc/managers/disp_info
07:26:29.901 | INFO | 777 | main:_update_device_attributes_on_backend:144 | Sending updated device attributes to the backend…
^— NEWEST —^

Hi StrK,

It looks like some of git's objects became corrupted. This most often happens when an action is interrupted mid-write, e.g. by a power failure.

Here's a Stack Overflow question that seems quite relevant:

Basically the answer is to delete the loose objects, and they should be recreated with the next pull.

cd /home/pi/mycroft-core/
rm -f .git/objects/d8/460345cbf60cb55b36d792e34c0296e9596f94

Then to check if there are other corrupted objects:

git fsck --full

Once you get the all clear from fsck, I'd reboot the Picroft, which will attempt an update, and check that you aren't getting the same messages.
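
If you want to confirm the repo is healthy before rebooting, you could also try the pull manually, something like this (assuming the default origin/branch setup on Picroft):

cd /home/pi/mycroft-core/
git pull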

Still the same, but there's progress:

After logging in:

remote: Enumerating objects: 233, done.
remote: Counting objects: 100% (233/233), done.
remote: Compressing objects: 100% (28/28), done.
remote: Total 295 (delta 209), reused 217 (delta 205), pack-reused 62
Receiving objects: 100% (295/295), 67.08 KiB | 0 bytes/s, done.
Resolving deltas: 99% (214/216), completed with 61 local objects.
fatal: pack has 2 unresolved deltas
fatal: index-pack failed
Initializing…

Too bad I don't know which pack is in a failed state. How do I detect that, and how do I proceed?
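
For reference, one general way to find a damaged pack is to run git verify-pack against each pack index under .git/objects/pack (this is plain git, nothing Picroft-specific):

cd /home/pi/mycroft-core/
# verify-pack exits non-zero for a damaged pack, so only bad ones are printed
for idx in .git/objects/pack/*.idx; do
  git verify-pack "$idx" > /dev/null 2>&1 || echo "bad pack: $idx"
done

A damaged pack can then be deleted (the .pack together with its .idx) and re-fetched, though re-cloning as described below is probably the simplest fix.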

Okay, maybe not a clean solution, but it worked in this case:

I moved the whole mycroft-core dir to ~/mycroft-core-temp.

I rebuilt the whole mycroft-core dir in my home dir:

  • cd ~/
  • git clone https://github.com/MycroftAI/mycroft-core.git
  • cd mycroft-core

and copied the missing content from the temp dir to the rebuilt dir (example commands below):

dirs:
.venv
mimic
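
For example (with the old copy at ~/mycroft-core-temp, as above):

cp -r ~/mycroft-core-temp/.venv ~/mycroft-core/
cp -r ~/mycroft-core-temp/mimic ~/mycroft-core/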

Then:
mycroft-stop
sudo rm -rf /opt/mycroft/.skills-repo
msm update

And finally, check for SD card corruption (yes, there was some):
sudo touch /forcefsck
sudo reboot

Ah, card corruption makes sense.

Thanks for the update; it always helps future people looking through the forums with a similar issue.