| Author | Commit | Message | Date |
|--------|--------|---------|------|
| | 44507493a3 | deleted old recordings, updated GPU for training | 2025-05-22 15:57:20 -04:00 |
| liyuxiao2 | c2a71605f8 | fixed config issue | 2025-05-22 14:41:07 -04:00 |
| liyuxiao2 | 85d6afd976 | disabled GPU for now | 2025-05-22 14:33:08 -04:00 |
| liyuxiao2 | 310cc10f71 | tweaked basic path | 2025-05-22 14:28:21 -04:00 |
| liyuxiao2 | 5970af5be7 | fixed paths | 2025-05-22 14:26:15 -04:00 |
| liyuxiao2 | ff3d45653d | formatted all files | 2025-05-22 14:12:36 -04:00 |
| liyuxiao2 | ba796961a3 | reorganized file structure | 2025-05-22 14:11:18 -04:00 |
| liyuxiao2 | b943614677 | tweaked path so that uploading dataset artifacts now works | 2025-05-21 15:56:55 -04:00 |
| liyuxiao2 | a38670cf59 | updated workflow upload path | 2025-05-21 15:54:36 -04:00 |
| liyuxiao2 | 2b2766524c | removed data/ from the .gitignore | 2025-05-21 15:52:16 -04:00 |
| | 743f88f1ef | deleted datasets to test whether the workflow actually generates them | 2025-05-16 11:40:43 -04:00 |
| | ad2ef9509f | Added the split datagen script | 2025-05-16 11:26:33 -04:00 |
| | 5e5a320bc5 | updated workflow file; deleted the initially created dataset file | 2025-05-15 10:55:36 -04:00 |
| | 12b920c88d | LFS'd the recordings folder; added dataset_gen.py, which generates a .h5 dataset file under the data folder; modified the workflow file to run the script on every push/PR and upload the dataset as a workflow artifact | 2025-05-15 10:47:54 -04:00 |
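
For context, here is a minimal sketch of what a `dataset_gen.py`-style script, as described in 12b920c88d, might look like, assuming the LFS-tracked recordings are `.npy` files that get packed into a single `data/dataset.h5`; the file names, input format, and HDF5 layout below are illustrative assumptions, not the repository's actual code:

```python
# Hypothetical sketch of dataset_gen.py (names, input format, and HDF5
# layout are assumptions, not the repository's actual implementation).
from pathlib import Path

import h5py
import numpy as np

RECORDINGS_DIR = Path("recordings")    # assumed location of the LFS-tracked recordings
OUTPUT_PATH = Path("data/dataset.h5")  # assumed output file under the data folder


def main() -> None:
    OUTPUT_PATH.parent.mkdir(parents=True, exist_ok=True)
    with h5py.File(OUTPUT_PATH, "w") as f:
        # Pack each recording into its own HDF5 dataset, keyed by file name.
        for rec in sorted(RECORDINGS_DIR.glob("*.npy")):
            f.create_dataset(rec.stem, data=np.load(rec), compression="gzip")
    print(f"Wrote {OUTPUT_PATH}")


if __name__ == "__main__":
    main()
```

The path fixes in a38670cf59 and b943614677 would then amount to pointing the workflow's artifact-upload step at whatever location this script writes to, e.g. an `actions/upload-artifact` step with `path: data/`, assuming a GitHub-Actions-style workflow is in use.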