Computer Impulse Buy / AI Ganbang / Soaking Discussion

Haha, I remember when I saw him in Antwerp there was an entire page in the program devoted to his Campy etude. And he didn't play the damn thing :comme:

1 Like

Here's the real program from another broadcast. A peak-documentation Passacaglia from around the time he was prepping it properly. I traded VHS tapes for the broadcast back in the day (no joke).

  • Franz Schubert – Piano Sonata in B-flat major, D. 960
  • Leopold Godowsky – Passacaglia (44 Variations, Cadenza and Fugue on a Theme by Schubert)
  • Sergei Rachmaninoff – Piano Sonata No. 2 in B-flat minor, Op. 36

Somewhere there's lost footage of the Liszt Pags and a fit Passacaglia. Damn!

1 Like

yep

There ain't no Godowsky at all here…

1 Like

Yeah, my point, which I didn't make even a little, was that there's a lost Godowsky Passacaglia in some raw footage that's almost definitely gone.

1 Like

I looked a couple of years ago and the basement vid BREW posted is still up. The SDC search function is a mess though, so good luck finding it.

Lol, a quick search through some threads helped me appreciate just how many YouTube sacred relics came from people trading vids here decades ago.

This is done now, so I'm eating cheeseburgers to celebrate. It's kind of dark, but dialing in the color grading levels is annoying.

1 Like

Awesome! Now use the supercomputer to find the Passacaglia!

1 Like

It's trapped inside the SDC's past!

1 Like

I've failed so far, but here is a 1991 DOC Norma, which makes it less of a fail, maybe.

True, this is a thing I couldn't find on the tube.

Christ on a cracker, the Kancerto is here too. And zero Schubert. My favorite Chop-Goz as well. This man used to play really well!

Anything tubed or early compressed MPEG doesn't work. I just tried to do the BOOZONI concert and it looked like the Cocoon aliens visiting a Roblox world.

1 Like

Fo sho, that's why I reached out to him. I hope he responds!

1 Like

Yo

1 Like

@k-nar I'm trying to run this shit today, pray for me. It involves snake-charming magic (Python).

Uhhh, this is horrible. I don't want to learn things and develop skills just to use this shit.

@k-nar it says I have to use the nightly build until wheels ship for my stuff. What?

Should I just wait, or build from source every day I want to use this turd, if I can even get it going?

I don't want to bother with the CPU version.

1 Like

where?

What you’ve done so far

  1. Cloned DLoRAL and created a dloral conda env.

  2. Installed the repo’s Python requirements.

  3. Downloaded all five model files (SD-2.1, BERT, RAM, DAPE, DLoRAL LoRA).

  4. Tried three different dependency stacks:

    | Attempt | Torch / TV pair | MMCV / xFormers | Result |
    | --- | --- | --- | --- |
    | CUDA 11.8 wheels | 2.0.1 +cu118 | cu118 wheels | Won't load on RTX 5090 (sm_120 unsupported) |
    | Nightly Torch + cu121 | 2.7.1 (2025-xx-xx) | compiled for 2.0.1 | DLL errors |
    | CPU-only | 2.0.1 +cpu | mmcv-full (cpu), NumPy 1.26 | Imports OK, but DLoRAL calls .cuda() and crashes |
  5. Pinned NumPy to 1.26 and installed missing libs (mmcv-full, mmengine).

  6. Current state of your env:

    torch          2.0.1+cpu
    torchvision    0.15.2+cpu
    mmcv-full      1.7.2
    mmengine       installed
    numpy          1.26.4
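
A listing like the one above can be reproduced with a short stdlib snippet (a sketch; `env_snapshot` is a made-up helper name, and the versions printed will obviously depend on your env):

```python
from importlib.metadata import version, PackageNotFoundError

def env_snapshot(packages):
    """Report the installed version of each package, or 'not installed'."""
    out = {}
    for name in packages:
        try:
            out[name] = version(name)
        except PackageNotFoundError:
            out[name] = "not installed"
    return out

# The packages pinned in the env above:
for pkg, ver in env_snapshot(
    ["torch", "torchvision", "mmcv-full", "mmengine", "numpy"]
).items():
    print(f"{pkg:<14} {ver}")
```

Pasting its output is usually more reliable than `pip list`, since it reports exactly what the interpreter you'll run DLoRAL with can actually import.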
    

Why it still fails

  • CPU PyTorch ⇒ .cuda() calls in DLoRAL raise AssertionError: Torch not compiled with CUDA enabled.
  • When you tried GPU builds, the binary extensions (MMCV, TorchVision, xFormers) weren’t built against the same Torch version/CUDA tool-chain, so they refused to load.
  • Your RTX 5090 (compute 12.0) needs a newer CUDA 12.x wheel that matches all extensions, but pre-built wheels for that combo aren’t widely available yet.
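
The matching rule above can be sketched as a toy check (simplified: real binary compatibility also depends on the C++ ABI and exact CUDA minor versions; `wheels_compatible` is a hypothetical helper, not a real pip feature):

```python
def wheels_compatible(torch_build: str, ext_built_against: str) -> bool:
    """Toy model of binary-wheel compatibility: a compiled extension
    (MMCV, TorchVision, xFormers) only loads cleanly if it was built
    against the same Torch major.minor AND the same CUDA suffix as the
    Torch that is actually installed."""
    def split(ver: str):
        base, _, cuda = ver.partition("+")       # "2.0.1+cu118" -> ("2.0.1", "cu118")
        return tuple(base.split(".")[:2]), cuda  # (("2", "0"), "cu118")
    return split(torch_build) == split(ext_built_against)

# The attempts from the table above:
print(wheels_compatible("2.0.1+cu118", "2.0.1+cu118"))  # matches, but sm_120 still unsupported
print(wheels_compatible("2.7.1+cu121", "2.0.1+cu118"))  # mismatch -> DLL errors
```

This is why "just upgrade Torch" breaks things: every compiled extension has to move in lockstep with it.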

The decision point

  1. Run on CPU right now (quickest):

    • Patch DLoRAL code: replace every .cuda() with .to(torch.device('cpu')) (or a DEVICE variable).
    • Keep current CPU Torch/TV/MMCV stack.
    • Works, but slow.
  2. Run on GPU (fast, but heavier setup):

    • Install a matching nightly Torch + TorchVision pair built on the same date with CUDA 12.1.
    • Re-build or install MMCV-full and xFormers for that exact Torch.
    • No code changes needed once the binaries line up.

Choose CPU for immediate functionality; choose GPU if you’re willing to compile or wait for official CUDA 12 wheels that support sm_120.
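
A minimal sketch of the CPU patch from option 1, assuming a mechanical find-and-replace is enough (`patch_cuda_calls` is a hypothetical helper; it only catches bare `.cuda()` calls, not variants like `.cuda(0)` or device kwargs, so a real patch may still need a manual pass):

```python
import re

# A single shared DEVICE line, prepended to the patched file, so the same
# code runs on the CPU-only build now and on GPU later without re-patching.
DEVICE_HEADER = "DEVICE = torch.device('cuda' if torch.cuda.is_available() else 'cpu')\n"

def patch_cuda_calls(source: str) -> str:
    """Rewrite hard-coded `.cuda()` calls to `.to(DEVICE)`."""
    return DEVICE_HEADER + re.sub(r"\.cuda\(\)", ".to(DEVICE)", source)

print(patch_cuda_calls("model = model.cuda()\nx = x.cuda()\n"))
```

Run it over each DLoRAL source file (after backing it up) and diff the result before committing to it.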

Also, I don't know what any of this means, but I feel like I got far. :sunglasses:

Type `nvidia-smi` and paste the output.

2 Likes