I think I must have just spent more time (5 mins) looking at this repo trying to understand why you posted it than you spent actually coding it.
I don't want to put you off, but there's no substance at all here. I'd have assumed Claude wrote it, given you've vendored in rules, but the code is so questionable that even an LLM from 2022 would do better. E.g. `flattenData` from utils could just be [1] rather than a BFS, though I don't really get why your public API allows TensorData to be a single integer in the first place; 50% of your logic exists to work around that.
But rant over. My point is: maybe post this when you've built even 5% of PyTorch, or learnt something of value, or have something tangible to impart upon us, rather than a library of ill-thought-out array utils.

[1]: `flattenData = (x: TensorData) => Array.isArray(x) ? x.flat(Infinity) : [x]`
2. It's clear you didn't read the rules folder, because one of the rules tells Cursor to teach me.
3. You say you don't understand why TensorData can be a single integer. Scalars are tensors too, and I'm trying to match PyTorch's spec and ops as closely as possible. The obvious use cases are reduction ops and scalar unary ops.

4. Fine, I'll post after I've made more progress.
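For concreteness, here's how the scalar case plays out. This is my own sketch, not the repo's code: the `TensorData` definition and the `sum` helper are assumptions. The point is that a full reduction over a tensor naturally produces a scalar, so a scalar has to be valid `TensorData` if ops are meant to compose.

```typescript
// Sketch only -- an assumed TensorData union, not the repo's actual definition.
type TensorData = number | TensorData[];

// Flatten arbitrary nesting; a bare scalar becomes a one-element array.
// (Same behavior as the one-liner suggested in the thread, written recursively
// to keep the TypeScript types simple.)
const flattenData = (x: TensorData): number[] =>
  Array.isArray(x) ? x.flatMap(flattenData) : [x];

// A full reduction returns a scalar -- one reason scalars must be valid TensorData.
const sum = (x: TensorData): TensorData =>
  flattenData(x).reduce((a, b) => a + b, 0);

console.log(sum([[1, 2], [3, 4]])); // 10
console.log(sum(5));                // 5: scalar in, scalar out
```

Because `sum` both consumes and produces `TensorData`, chaining `sum(sum(x))` type-checks, which is the composability argument in a nutshell.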
The entire repo is 2-3 array-access functions. Why is it even posted here? No harm in trying to learn, but there's nothing here that's even close to PyTorch.
I posted when I had more operations, including unary, binary, and matmul, but I got a lot of help from Claude writing those, realized I didn't really understand broadcasting, and so scrapped all of it and started fresh.
Right now it's just a tensor-manipulation lib, but I'll be adding an autograd engine soon. It's been fun learning about strides and doing matmuls by hand, then coding them without numpy. Will post again after making more progress.
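Since strides came up, here's a minimal sketch of the idea, illustrative only: `Tensor2D`, `at`, and `matmul` are names I made up, not the repo's API, and I'm assuming a row-major flat buffer.

```typescript
// Illustrative sketch: a 2-D tensor as a flat row-major buffer plus strides.
interface Tensor2D {
  data: number[];            // flat storage
  shape: [number, number];   // [rows, cols]
  strides: [number, number]; // elements to skip per step along each axis
}

const tensor2d = (data: number[], rows: number, cols: number): Tensor2D => ({
  data,
  shape: [rows, cols],
  strides: [cols, 1], // row-major: stepping to the next row skips `cols` elements
});

// Read element (i, j) via the stride formula: offset = i*strides[0] + j*strides[1]
const at = (t: Tensor2D, i: number, j: number): number =>
  t.data[i * t.strides[0] + j * t.strides[1]];

// Textbook triple-loop matmul over the flat buffers, no nested arrays needed.
const matmul = (a: Tensor2D, b: Tensor2D): Tensor2D => {
  const [m, k] = a.shape;
  const n = b.shape[1];
  const out = tensor2d(new Array(m * n).fill(0), m, n);
  for (let i = 0; i < m; i++)
    for (let j = 0; j < n; j++)
      for (let p = 0; p < k; p++)
        out.data[i * n + j] += at(a, i, p) * at(b, p, j);
  return out;
};

// [[1,2],[3,4]] x [[5,6],[7,8]] = [[19,22],[43,50]]
const c = matmul(tensor2d([1, 2, 3, 4], 2, 2), tensor2d([5, 6, 7, 8], 2, 2));
console.log(c.data); // [19, 22, 43, 50]
```

A nice payoff of this layout: a transpose is just swapping the `shape` and `strides` entries, with no data copy, which is much of why PyTorch stores tensors this way.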
Thanks for sharing! I used to teach students to build ML algorithms from scratch (everything from Markov chains to multilayer perceptrons and convolutional neural networks). I rewrote some of my notes in TypeScript here:
1. https://github.com/keshavsaharia/numbers/blob/dev/lib/nn/neu...
2. https://github.com/keshavsaharia/numbers/blob/dev/lib/cnn/cn... (still working on the visualization)
Hope you find these useful in your own learning journey!