Authors: Koki Tsuyuzaki [aut, cre]
Last modified: 2022-04-21 02:27:49
Compiled: Tue Apr 26 16:18:39 2022

1 What is einsum

einsum is an easy and intuitive way to write tensor operations.

It was originally introduced by the Numpy package (https://numpy.org/doc/stable/reference/generated/numpy.einsum.html) of Python, but similar tools inspired by Numpy have been implemented in other languages (e.g. R, Julia). In this vignette, we will first use the CRAN einsum package.

einsum is named after the Einstein summation (https://en.wikipedia.org/wiki/Einstein_notation) introduced by Albert Einstein, a notational convention that implies summation over a set of indexed terms in a formula.

Here, we consider a simple example of einsum: matrix multiplication. If we naively implement matrix multiplication, the calculation looks like the following for loop.

# Random input matrices and an output matrix initialized with zeros
A <- matrix(runif(3*4), nrow=3, ncol=4)
B <- matrix(runif(4*5), nrow=4, ncol=5)
C <- matrix(0, nrow=3, ncol=5)

I <- nrow(A)
J <- ncol(A) # == nrow(B)
K <- ncol(B)

# Naive triple loop computing C = A %*% B
for(i in 1:I){
  for(j in 1:J){
    for(k in 1:K){
      C[i,k] <- C[i,k] + A[i,j] * B[j,k]
    }
  }
}

Therefore, any programming language can implement this. However, when analyzing tensor data, such operations tend to become more complicated and more error-prone, because the order of the tensors is higher or more tensors are handled simultaneously. In addition, several programming languages, especially R, are known to become significantly slower when the computation is written as explicit for loops.

Obviously, in the case of the R language, this should be executed with the built-in matrix multiplication operator (%*%) provided by R, as shown below.

C <- A %*% B
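
To give a rough sense of the speed gap claimed above, the naive loop and %*% can be timed on somewhat larger inputs. This is only an illustrative sketch; the matrices A2 and B2 are introduced here purely for the comparison, and the timings depend on the machine and the BLAS library in use.

# Illustrative timing sketch (machine-dependent; A2/B2 defined only here)
A2 <- matrix(runif(200*200), nrow=200, ncol=200)
B2 <- matrix(runif(200*200), nrow=200, ncol=200)

system.time({
    C2 <- matrix(0, nrow=200, ncol=200)
    for(i in 1:200){
        for(j in 1:200){
            for(k in 1:200){
                C2[i,k] <- C2[i,k] + A2[i,j] * B2[j,k]
            }
        }
    }
})
system.time(A2 %*% B2)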

However, operations more complex than matrix multiplication are not always provided as standard by programming languages.

einsum is a function that solves this problem. To put it simply, einsum is a wrapper for the for loop above. Like the Einstein summation, it omits much of the notation such as for, the array sizes (e.g. I, J, and K), brackets (e.g. {}, (), and []), and even the addition operator (+), and extracts only the array subscripts (e.g. i, j, and k) to express the tensor operation concisely, as follows.

suppressPackageStartupMessages(library("einsum"))
C <- einsum('ij,jk->ik', A, B)

2 Einsum of DelayedTensor

The CRAN einsum package is easy to use because its syntax is almost the same as that of Numpy's einsum, except that it prohibits the implicit mode that omits '->'. It is extremely fast because the internal calculation is actually performed in C++. When the input tensor is huge, however, it is not scalable because it assumes that the input is R's standard array.

Using einsum of DelayedTensor, we can augment the functionality of the CRAN einsum; in DelayedTensor, the input DelayedArray objects are divided into multiple block tensors and the CRAN einsum is incrementally applied to them in block processing.
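
As a minimal sketch, the matrix multiplication from the previous section can be written in exactly the same way against DelayedArray inputs; the block processing happens transparently.

# Minimal sketch: DelayedTensor::einsum on DelayedArray versions of A and B
suppressPackageStartupMessages(library("DelayedTensor"))
suppressPackageStartupMessages(library("DelayedArray"))

dA <- DelayedArray(A)
dB <- DelayedArray(B)
dC <- DelayedTensor::einsum('ij,jk->ik', dA, dB)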

3 Typical operations of einsum

A surprisingly large number of tensor operations can be handled uniformly in einsum.

In more detail, einsum is capable of performing any tensor operation that can be described by a combination of the following three operations (https://ajcr.net/Basic-guide-to-einsum/); a short sketch combining all three follows the list.

  1. Multiplication: the elements of the tensors on the left side of -> are multiplied by each other
  2. Summation: if a subscript on the left side of -> is missing on the right side, the summation is performed along that subscript
  3. Permutation: the subscripts on the right side of -> can be rearranged in any order
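
For instance, all three operations can appear in a single subscript string. The following minimal sketch reuses the matrices A and B defined in Section 1; the values are multiplied, summed over the vanished subscript j, and the remaining subscripts are permuted.

# Minimal sketch combining Multiplication, Summation, and Permutation:
# multiply A[i,j] * B[j,k], sum over j, output as [k,i] (same as t(A %*% B))
einsum::einsum('ij,jk->ki', A, B)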

Some typical operations are introduced below, using the following arrays and DelayedArray objects.

suppressPackageStartupMessages(library("DelayedTensor"))
suppressPackageStartupMessages(library("DelayedArray"))

arrA <- array(runif(3), dim=c(3))
arrB <- array(runif(3*3), dim=c(3,3))
arrC <- array(runif(3*4), dim=c(3,4))
arrD <- array(runif(3*3*3), dim=c(3,3,3))
arrE <- array(runif(3*4*5), dim=c(3,4,5))

darrA <- DelayedArray(arrA)
darrB <- DelayedArray(arrB)
darrC <- DelayedArray(arrC)
darrD <- DelayedArray(arrD)
darrE <- DelayedArray(arrE)

3.1 No Operation
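
A minimal sketch of the no-operation case, using the arrays defined above (output omitted): when the subscripts are identical on both sides of ->, the input is returned unchanged.

# No operation: identical subscripts on both sides of '->'
einsum::einsum('i->i', arrA)
DelayedTensor::einsum('i->i', darrA)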

3.1.2 diag

We can also extract the diagonal elements as follows.

einsum::einsum('ii->i', arrB)
## [1] 0.3632094 0.9056445 0.7821177
DelayedTensor::einsum('ii->i', darrB)
## <3> array of class HDF5Array and type "double":
##       [1]       [2]       [3] 
## 0.3632094 0.9056445 0.7821177
einsum::einsum('iii->i', arrD)
## [1] 0.3214328 0.2450162 0.6501136
DelayedTensor::einsum('iii->i', darrD)
## <3> array of class HDF5Array and type "double":
##       [1]       [2]       [3] 
## 0.3214328 0.2450162 0.6501136

3.2 Multiplication

By giving multiple arrays or DelayedArray objects as input and separating their subscripts with "," on the left side of ->, multiplication will be performed.

3.2.1 Hadamard Product

The Hadamard product can also be implemented in einsum; the corresponding elements of the inputs are multiplied one by one.

einsum::einsum('i,i->i', arrA, arrA)
## [1] 0.1573571 0.1983728 0.1903268
DelayedTensor::einsum('i,i->i', darrA, darrA)
## <3> array of class HDF5Array and type "double":
##       [1]       [2]       [3] 
## 0.1573571 0.1983728 0.1903268
einsum::einsum('ij,ij->ij', arrC, arrC)
##              [,1]      [,2]       [,3]       [,4]
## [1,] 9.527117e-05 0.5153143 0.64635155 0.36438304
## [2,] 8.977840e-02 0.7779731 0.09696237 0.25201790
## [3,] 7.933092e-01 0.7032446 0.39215092 0.05794673
DelayedTensor::einsum('ij,ij->ij', darrC, darrC)
## <3 x 4> matrix of class HDF5Matrix and type "double":
##              [,1]         [,2]         [,3]         [,4]
## [1,] 9.527117e-05 5.153143e-01 6.463516e-01 3.643830e-01
## [2,] 8.977840e-02 7.779731e-01 9.696237e-02 2.520179e-01
## [3,] 7.933092e-01 7.032446e-01 3.921509e-01 5.794673e-02
einsum::einsum('ijk,ijk->ijk', arrE, arrE)
## , , 1
## 
##           [,1]       [,2]         [,3]        [,4]
## [1,] 0.4890149 0.13146181 0.0005490505 0.003796461
## [2,] 0.3614990 0.94558591 0.5367700021 0.081639950
## [3,] 0.6871453 0.04479398 0.0698865950 0.020126520
## 
## , , 2
## 
##             [,1]       [,2]       [,3]      [,4]
## [1,] 0.439972623 0.06426262 0.57908122 0.7821088
## [2,] 0.310428890 0.13083715 0.03503462 0.1037905
## [3,] 0.005523494 0.57689340 0.90064699 0.2498730
## 
## , , 3
## 
##             [,1]        [,2]      [,3]       [,4]
## [1,] 0.002112074 0.028281176 0.3908959 0.63297998
## [2,] 0.714505036 0.004694572 0.1554589 0.04555316
## [3,] 0.029241397 0.988099432 0.5699924 0.11690282
## 
## , , 4
## 
##            [,1]        [,2]       [,3]      [,4]
## [1,] 0.54795321 0.059756258 0.11241257 0.1246062
## [2,] 0.33293522 0.001509516 0.05032845 0.8001988
## [3,] 0.06762975 0.603478459 0.02570139 0.5526925
## 
## , , 5
## 
##            [,1]       [,2]        [,3]      [,4]
## [1,] 0.53529209 0.70171115 0.101737564 0.6714189
## [2,] 0.59824648 0.20068875 0.027731101 0.2576608
## [3,] 0.01227194 0.00723296 0.007695155 0.4836611
DelayedTensor::einsum('ijk,ijk->ijk', darrE, darrE)
## <3 x 4 x 5> array of class HDF5Array and type "double":
## ,,1
##              [,1]         [,2]         [,3]         [,4]
## [1,] 0.4890148864 0.1314618072 0.0005490505 0.0037964608
## [2,] 0.3614989602 0.9455859059 0.5367700021 0.0816399503
## [3,] 0.6871453175 0.0447939811 0.0698865950 0.0201265196
## 
## ,,2
##             [,1]        [,2]        [,3]        [,4]
## [1,] 0.439972623 0.064262622 0.579081220 0.782108840
## [2,] 0.310428890 0.130837145 0.035034623 0.103790458
## [3,] 0.005523494 0.576893402 0.900646991 0.249873004
## 
## ,,3
##             [,1]        [,2]        [,3]        [,4]
## [1,] 0.002112074 0.028281176 0.390895868 0.632979979
## [2,] 0.714505036 0.004694572 0.155458867 0.045553164
## [3,] 0.029241397 0.988099432 0.569992433 0.116902819
## 
## ,,4
##             [,1]        [,2]        [,3]        [,4]
## [1,] 0.547953210 0.059756258 0.112412569 0.124606198
## [2,] 0.332935222 0.001509516 0.050328446 0.800198818
## [3,] 0.067629754 0.603478459 0.025701389 0.552692502
## 
## ,,5
##             [,1]        [,2]        [,3]        [,4]
## [1,] 0.535292091 0.701711154 0.101737564 0.671418856
## [2,] 0.598246479 0.200688746 0.027731101 0.257660777
## [3,] 0.012271942 0.007232960 0.007695155 0.483661104

3.2.2 Outer Product

The outer product can also be implemented in einsum; the subscripts of the input arrays are all different, and all of them are kept in the output.

einsum::einsum('i,j->ij', arrA, arrA)
##           [,1]      [,2]      [,3]
## [1,] 0.1573571 0.1766787 0.1730586
## [2,] 0.1766787 0.1983728 0.1943081
## [3,] 0.1730586 0.1943081 0.1903268
DelayedTensor::einsum('i,j->ij', darrA, darrA)
## <3 x 3> matrix of class HDF5Matrix and type "double":
##           [,1]      [,2]      [,3]
## [1,] 0.1573571 0.1766787 0.1730586
## [2,] 0.1766787 0.1983728 0.1943081
## [3,] 0.1730586 0.1943081 0.1903268
einsum::einsum('ij,klm->ijklm', arrC, arrE)
## , , 1, 1, 1
## 
##             [,1]      [,2]      [,3]      [,4]
## [1,] 0.006825615 0.5019924 0.5622060 0.4221241
## [2,] 0.209530362 0.6167985 0.2177523 0.3510563
## [3,] 0.622848298 0.5864274 0.4379128 0.1683354
## 
## , , 2, 1, 1
## 
##             [,1]      [,2]      [,3]      [,4]
## [1,] 0.005868597 0.4316081 0.4833792 0.3629381
## [2,] 0.180152150 0.5303173 0.1872213 0.3018347
## [3,] 0.535518858 0.5042045 0.3765131 0.1447331
## 
## , , 3, 1, 1
## 
##             [,1]      [,2]      [,3]      [,4]
## [1,] 0.008091053 0.5950595 0.6664364 0.5003840
## [2,] 0.248376337 0.7311502 0.2581225 0.4161405
## [3,] 0.738321539 0.6951484 0.5190999 0.1995440
## 
## , , 1, 2, 1
## 
##           [,1]      [,2]      [,3]       [,4]
## [1,] 0.0035390 0.2602771 0.2914971 0.21886629
## [2,] 0.1086390 0.3198027 0.1129019 0.18201849
## [3,] 0.3229394 0.3040556 0.2270526 0.08727991
## 
## , , 2, 2, 1
## 
##             [,1]      [,2]      [,3]      [,4]
## [1,] 0.009491421 0.6980501 0.7817806 0.5869885
## [2,] 0.291364354 0.8576948 0.3027974 0.4881645
## [3,] 0.866107377 0.8154620 0.6089437 0.2340804
## 
## , , 3, 2, 1
## 
##             [,1]      [,2]       [,3]       [,4]
## [1,] 0.002065811 0.1519308 0.17015481 0.12775824
## [2,] 0.063415549 0.1866776 0.06590395 0.10624917
## [3,] 0.188508559 0.1774856 0.13253679 0.05094767
## 
## , , 1, 3, 1
## 
##              [,1]       [,2]        [,3]        [,4]
## [1,] 0.0002287109 0.01682063 0.018838249 0.014144422
## [2,] 0.0070208882 0.02066752 0.007296385 0.011763101
## [3,] 0.0208702369 0.01964986 0.014673467 0.005640539
## 
## , , 2, 3, 1
## 
##             [,1]      [,2]      [,3]      [,4]
## [1,] 0.007151133 0.5259327 0.5890179 0.4422555
## [2,] 0.219523006 0.6462141 0.2281370 0.3677984
## [3,] 0.652552354 0.6143945 0.4587972 0.1763635
## 
## , , 3, 3, 1
## 
##             [,1]      [,2]       [,3]       [,4]
## [1,] 0.002580344 0.1897724 0.21253543 0.15957910
## [2,] 0.079210520 0.2331735 0.08231871 0.13271275
## [3,] 0.235460566 0.2216921 0.16554785 0.06363725
## 
## , , 1, 4, 1
## 
##              [,1]       [,2]       [,3]       [,4]
## [1,] 0.0006014094 0.04423088 0.04953633 0.03719363
## [2,] 0.0184618571 0.05434652 0.01918629 0.03093180
## [3,] 0.0548795704 0.05167050 0.03858478 0.01483214
## 
## , , 2, 4, 1
## 
##             [,1]      [,2]       [,3]       [,4]
## [1,] 0.002788895 0.2051103 0.22971310 0.17247670
## [2,] 0.085612522 0.2520192 0.08897192 0.14343894
## [3,] 0.254491105 0.2396098 0.17892787 0.06878058
## 
## , , 3, 4, 1
## 
##            [,1]      [,2]       [,3]       [,4]
## [1,] 0.00138473 0.1018405 0.11405616 0.08563739
## [2,] 0.04250796 0.1251315 0.04417596 0.07121968
## [3,] 0.12635883 0.1189700 0.08884049 0.03415064
## 
## , , 1, 1, 2
## 
##             [,1]      [,2]      [,3]      [,4]
## [1,] 0.006474311 0.4761556 0.5332701 0.4003980
## [2,] 0.198746162 0.5850529 0.2065449 0.3329880
## [3,] 0.590791270 0.5562449 0.4153741 0.1596715
## 
## , , 2, 1, 2
## 
##             [,1]      [,2]      [,3]      [,4]
## [1,] 0.005438283 0.3999605 0.4479355 0.3363258
## [2,] 0.166942529 0.4914319 0.1734933 0.2797028
## [3,] 0.496252044 0.4672338 0.3489054 0.1341206
## 
## , , 3, 1, 2
## 
##              [,1]       [,2]       [,3]       [,4]
## [1,] 0.0007254169 0.05335106 0.05975047 0.04486276
## [2,] 0.0222685971 0.06555249 0.02314241 0.03730978
## [3,] 0.0661954558 0.06232469 0.04654077 0.01789046
## 
## , , 1, 2, 2
## 
##             [,1]      [,2]       [,3]       [,4]
## [1,] 0.002474343 0.1819765 0.20380443 0.15302356
## [2,] 0.075956535 0.2235947 0.07893704 0.12726088
## [3,] 0.225787795 0.2125849 0.15874711 0.06102302
## 
## , , 2, 2, 2
## 
##             [,1]      [,2]      [,3]      [,4]
## [1,] 0.003530582 0.2596579 0.2908037 0.2183457
## [2,] 0.108380576 0.3190420 0.1126334 0.1815855
## [3,] 0.322171243 0.3033324 0.2265125 0.0870723
## 
## , , 3, 2, 2
## 
##             [,1]      [,2]      [,3]      [,4]
## [1,] 0.007413589 0.5452352 0.6106357 0.4584868
## [2,] 0.227579799 0.6699310 0.2365099 0.3812971
## [3,] 0.676501912 0.6369436 0.4756357 0.1828362
## 
## , , 1, 3, 2
## 
##             [,1]      [,2]      [,3]      [,4]
## [1,] 0.007427634 0.5462681 0.6117925 0.4593554
## [2,] 0.228010929 0.6712001 0.2369580 0.3820194
## [3,] 0.677783485 0.6381503 0.4765367 0.1831826
## 
## , , 2, 3, 2
## 
##             [,1]      [,2]       [,3]       [,4]
## [1,] 0.001826962 0.1343646 0.15048150 0.11298682
## [2,] 0.056083441 0.1650939 0.05828413 0.09396463
## [3,] 0.166713193 0.1569647 0.11721288 0.04505710
## 
## , , 3, 3, 2
## 
##             [,1]      [,2]      [,3]      [,4]
## [1,] 0.009263136 0.6812608 0.7629774 0.5728704
## [2,] 0.284356542 0.8370658 0.2955146 0.4764233
## [3,] 0.845276010 0.7958487 0.5942975 0.2284503
## 
## , , 1, 4, 2
## 
##             [,1]      [,2]      [,3]      [,4]
## [1,] 0.008632058 0.6348479 0.7109974 0.5338419
## [2,] 0.264983920 0.7800382 0.2753818 0.4439656
## [3,] 0.787689107 0.7416292 0.5538093 0.2128865
## 
## , , 2, 4, 2
## 
##             [,1]      [,2]      [,3]       [,4]
## [1,] 0.003144557 0.2312676 0.2590080 0.19447232
## [2,] 0.096530518 0.2841587 0.1003183 0.16173142
## [3,] 0.286945856 0.2701668 0.2017462 0.07755203
## 
## , , 3, 4, 2
## 
##             [,1]      [,2]      [,3]      [,4]
## [1,] 0.004879108 0.3588358 0.4018778 0.3017441
## [2,] 0.149777160 0.4409019 0.1556544 0.2509432
## [3,] 0.445226403 0.4191919 0.3130302 0.1203301
## 
## , , 1, 1, 3
## 
##              [,1]       [,2]       [,3]       [,4]
## [1,] 0.0004485753 0.03299064 0.03694784 0.02774174
## [2,] 0.0137702088 0.04053563 0.01431055 0.02307121
## [3,] 0.0409332140 0.03853965 0.02877937 0.01106290
## 
## , , 2, 1, 3
## 
##             [,1]      [,2]      [,3]      [,4]
## [1,] 0.008250559 0.6067904 0.6795745 0.5102485
## [2,] 0.253272811 0.7455640 0.2632111 0.4243443
## [3,] 0.752876756 0.7088525 0.5293334 0.2034778
## 
## , , 3, 1, 3
## 
##            [,1]      [,2]       [,3]       [,4]
## [1,] 0.00166909 0.1227539 0.13747808 0.10322339
## [2,] 0.05123715 0.1508278 0.05324768 0.08584495
## [3,] 0.15230715 0.1434010 0.10708427 0.04116362
## 
## , , 1, 2, 3
## 
##             [,1]      [,2]       [,3]       [,4]
## [1,] 0.001641457 0.1207216 0.13520200 0.10151444
## [2,] 0.050388874 0.1483307 0.05236611 0.08442371
## [3,] 0.149785568 0.1410269 0.10531139 0.04048212
## 
## , , 2, 2, 3
## 
##             [,1]       [,2]       [,3]       [,4]
## [1,] 0.000668773 0.04918516 0.05508488 0.04135967
## [2,] 0.020529762 0.06043385 0.02133534 0.03439646
## [3,] 0.061026608 0.05745809 0.04290665 0.01649349
## 
## , , 3, 2, 3
## 
##             [,1]      [,2]      [,3]      [,4]
## [1,] 0.009702442 0.7135697 0.7991618 0.6000389
## [2,] 0.297842211 0.8767638 0.3095294 0.4990178
## [3,] 0.885363405 0.8335920 0.6224822 0.2392846
## 
## , , 1, 3, 3
## 
##             [,1]      [,2]      [,3]      [,4]
## [1,] 0.006102549 0.4488142 0.5026491 0.3774067
## [2,] 0.187333938 0.5514585 0.1946848 0.3138674
## [3,] 0.556867384 0.5243047 0.3915229 0.1505029
## 
## , , 2, 3, 3
## 
##             [,1]      [,2]      [,3]       [,4]
## [1,] 0.003848473 0.2830374 0.3169875 0.23800541
## [2,] 0.118139104 0.3477683 0.1227748 0.19793539
## [3,] 0.351179366 0.3306442 0.2469075 0.09491224
## 
## , , 3, 3, 3
## 
##             [,1]      [,2]      [,3]      [,4]
## [1,] 0.007369114 0.5419642 0.6069724 0.4557363
## [2,] 0.226214515 0.6659120 0.2350911 0.3790096
## [3,] 0.672443481 0.6331225 0.4727822 0.1817394
## 
## , , 1, 4, 3
## 
##             [,1]      [,2]      [,3]      [,4]
## [1,] 0.007765613 0.5711249 0.6396308 0.4802574
## [2,] 0.238386090 0.7017417 0.2477403 0.3994024
## [3,] 0.708624607 0.6671880 0.4982205 0.1915179
## 
## , , 2, 4, 3
## 
##             [,1]      [,2]       [,3]       [,4]
## [1,] 0.002083243 0.1532129 0.17159067 0.12883633
## [2,] 0.063950685 0.1882528 0.06646008 0.10714576
## [3,] 0.190099300 0.1789833 0.13365521 0.05137759
## 
## , , 3, 4, 3
## 
##             [,1]      [,2]      [,3]       [,4]
## [1,] 0.003337285 0.2454418 0.2748824 0.20639139
## [2,] 0.102446804 0.3015746 0.1064668 0.17164383
## [3,] 0.304532560 0.2867251 0.2141111 0.08230514
## 
## , , 1, 1, 4
## 
##             [,1]      [,2]      [,3]      [,4]
## [1,] 0.007225243 0.5313832 0.5951222 0.4468387
## [2,] 0.221798018 0.6529110 0.2305013 0.3716100
## [3,] 0.659315036 0.6207618 0.4635519 0.1781912
## 
## , , 2, 1, 4
## 
##             [,1]      [,2]      [,3]      [,4]
## [1,] 0.005631974 0.4142056 0.4638892 0.3483044
## [2,] 0.172888376 0.5089348 0.1796724 0.2896647
## [3,] 0.513926620 0.4838749 0.3613321 0.1388975
## 
## , , 3, 1, 4
## 
##             [,1]      [,2]       [,3]      [,4]
## [1,] 0.002538339 0.1866831 0.20907558 0.1569813
## [2,] 0.077921055 0.2293777 0.08097865 0.1305523
## [3,] 0.231627514 0.2180832 0.16285291 0.0626013
## 
## , , 1, 2, 4
## 
##             [,1]      [,2]       [,3]       [,4]
## [1,] 0.002386011 0.1754801 0.19652875 0.14756072
## [2,] 0.073244939 0.2156125 0.07611904 0.12271775
## [3,] 0.217727327 0.2049958 0.15307995 0.05884454
## 
## , , 2, 2, 4
## 
##              [,1]       [,2]       [,3]        [,4]
## [1,] 0.0003792274 0.02789042 0.03123585 0.023452977
## [2,] 0.0116413898 0.03426898 0.01209819 0.019504490
## [3,] 0.0346051035 0.03258158 0.02433019 0.009352622
## 
## , , 3, 2, 4
## 
##             [,1]      [,2]      [,3]      [,4]
## [1,] 0.007582486 0.5576568 0.6245472 0.4689321
## [2,] 0.232764535 0.6851934 0.2418981 0.3899838
## [3,] 0.691914017 0.6514545 0.4864716 0.1870016
## 
## , , 1, 3, 4
## 
##             [,1]      [,2]      [,3]       [,4]
## [1,] 0.003272564 0.2406819 0.2695516 0.20238882
## [2,] 0.100460043 0.2957261 0.1044021 0.16831512
## [3,] 0.298626730 0.2811646 0.2099588 0.08070899
## 
## , , 2, 3, 4
## 
##             [,1]      [,2]       [,3]       [,4]
## [1,] 0.002189715 0.1610434 0.18036039 0.13542094
## [2,] 0.067219098 0.1978741 0.06985675 0.11262180
## [3,] 0.199814961 0.1881308 0.14048611 0.05400342
## 
## , , 3, 3, 4
## 
##             [,1]      [,2]       [,3]       [,4]
## [1,] 0.001564801 0.1150838 0.12888806 0.09677371
## [2,] 0.048035711 0.1414036 0.04992061 0.08048112
## [3,] 0.142790574 0.1344409 0.10039334 0.03859160
## 
## , , 1, 4, 4
## 
##             [,1]      [,2]      [,3]       [,4]
## [1,] 0.003445487 0.2533996 0.2837947 0.21308305
## [2,] 0.105768354 0.3113523 0.1099187 0.17720890
## [3,] 0.314406173 0.2960214 0.2210530 0.08497365
## 
## , , 2, 4, 4
## 
##             [,1]      [,2]      [,3]      [,4]
## [1,] 0.008731316 0.6421479 0.7191730 0.5399804
## [2,] 0.268030907 0.7890077 0.2785483 0.4490706
## [3,] 0.796746556 0.7501570 0.5601774 0.2153344
## 
## , , 3, 4, 4
## 
##             [,1]      [,2]      [,3]      [,4]
## [1,] 0.007256422 0.5336762 0.5976903 0.4487669
## [2,] 0.222755127 0.6557285 0.2314960 0.3732136
## [3,] 0.662160132 0.6234405 0.4655522 0.1789601
## 
## , , 1, 1, 5
## 
##             [,1]      [,2]      [,3]      [,4]
## [1,] 0.007141282 0.5252082 0.5882065 0.4416462
## [2,] 0.219220587 0.6453238 0.2278227 0.3672917
## [3,] 0.651653385 0.6135481 0.4581651 0.1761205
## 
## , , 2, 1, 5
## 
##             [,1]      [,2]      [,3]      [,4]
## [1,] 0.007549546 0.5552341 0.6218340 0.4668949
## [2,] 0.231753338 0.6822167 0.2408472 0.3882896
## [3,] 0.688908142 0.6486244 0.4843582 0.1861892
## 
## , , 3, 1, 5
## 
##             [,1]       [,2]       [,3]       [,4]
## [1,] 0.001081278 0.07952300 0.08906171 0.06687068
## [2,] 0.033192699 0.09770998 0.03449517 0.05561249
## [3,] 0.098668356 0.09289875 0.06937185 0.02666681
## 
## , , 1, 2, 5
## 
##             [,1]      [,2]      [,3]      [,4]
## [1,] 0.008176359 0.6013333 0.6734628 0.5056596
## [2,] 0.250995025 0.7388588 0.2608440 0.4205280
## [3,] 0.746105828 0.7024775 0.5245728 0.2016479
## 
## , , 2, 2, 5
## 
##             [,1]      [,2]      [,3]      [,4]
## [1,] 0.004372625 0.3215863 0.3601604 0.2704211
## [2,] 0.134229333 0.3951334 0.1394964 0.2248937
## [3,] 0.399009056 0.3756771 0.2805357 0.1078390
## 
## , , 3, 2, 5
## 
##             [,1]       [,2]       [,3]       [,4]
## [1,] 0.000830116 0.06105119 0.06837423 0.05133778
## [2,] 0.025482613 0.07501365 0.02648254 0.04269468
## [3,] 0.075749415 0.07131999 0.05325798 0.02047258
## 
## , , 1, 3, 5
## 
##             [,1]      [,2]       [,3]       [,4]
## [1,] 0.003113303 0.2289690 0.25643368 0.19253946
## [2,] 0.095571101 0.2813345 0.09932127 0.16012398
## [3,] 0.284093900 0.2674816 0.19974103 0.07678124
## 
## , , 2, 3, 5
## 
##             [,1]      [,2]       [,3]       [,4]
## [1,] 0.001625415 0.1195418 0.13388069 0.10052235
## [2,] 0.049896431 0.1468811 0.05185435 0.08359865
## [3,] 0.148321736 0.1396487 0.10428220 0.04008649
## 
## , , 3, 3, 5
## 
##             [,1]       [,2]       [,3]       [,4]
## [1,] 0.000856228 0.06297161 0.07052500 0.05295266
## [2,] 0.026284191 0.07737327 0.02731557 0.04403768
## [3,] 0.078132178 0.07356342 0.05493325 0.02111656
## 
## , , 1, 4, 5
## 
##             [,1]      [,2]      [,3]      [,4]
## [1,] 0.007997928 0.5882106 0.6587660 0.4946247
## [2,] 0.245517634 0.7227349 0.2551517 0.4113509
## [3,] 0.729823779 0.6871475 0.5131252 0.1972474
## 
## , , 2, 4, 5
## 
##             [,1]      [,2]      [,3]      [,4]
## [1,] 0.004954558 0.3643848 0.4080924 0.3064102
## [2,] 0.152093299 0.4477199 0.1580614 0.2548237
## [3,] 0.452111338 0.4256742 0.3178709 0.1221908
## 
## , , 3, 4, 5
## 
##             [,1]      [,2]      [,3]      [,4]
## [1,] 0.006788148 0.4992369 0.5591199 0.4198070
## [2,] 0.208380226 0.6134128 0.2165570 0.3491293
## [3,] 0.619429413 0.5832084 0.4355091 0.1674114
DelayedTensor::einsum('ij,klm->ijklm', darrC, darrE)
## <3 x 4 x 3 x 4 x 5> array of class HDF5Array and type "double":
## ,,1,1,1
##             [,1]        [,2]        [,3]        [,4]
## [1,] 0.006825615 0.501992378 0.562205951 0.422124069
## [2,] 0.209530362 0.616798516 0.217752252 0.351056273
## [3,] 0.622848298 0.586427403 0.437912817 0.168335422
## 
## ,,2,1,1
##             [,1]        [,2]        [,3]        [,4]
## [1,] 0.005868597 0.431608123 0.483379162 0.362938134
## [2,] 0.180152150 0.530317314 0.187221251 0.301834741
## [3,] 0.535518858 0.504204529 0.376513145 0.144733145
## 
## ,,3,1,1
##             [,1]        [,2]        [,3]        [,4]
## [1,] 0.008091053 0.595059480 0.666436376 0.500383951
## [2,] 0.248376337 0.731150154 0.258122529 0.416140509
## [3,] 0.738321539 0.695148374 0.519099861 0.199544043
## 
## ...
## 
## ,,1,4,5
##             [,1]        [,2]        [,3]        [,4]
## [1,] 0.007997928 0.588210605 0.658765983 0.494624749
## [2,] 0.245517634 0.722734934 0.255151651 0.411350913
## [3,] 0.729823779 0.687147519 0.513125247 0.197247378
## 
## ,,2,4,5
##             [,1]        [,2]        [,3]        [,4]
## [1,] 0.004954558 0.364384788 0.408092444 0.306410209
## [2,] 0.152093299 0.447719939 0.158061381 0.254823722
## [3,] 0.452111338 0.425674242 0.317870901 0.122190834
## 
## ,,3,4,5
##             [,1]        [,2]        [,3]        [,4]
## [1,] 0.006788148 0.499236885 0.559119938 0.419806982
## [2,] 0.208380226 0.613412839 0.216556985 0.349129286
## [3,] 0.619429413 0.583208436 0.435509064 0.167411410

3.3 Summation

If a subscript that appears on the left side of -> is missing on the right side, the summation is performed along that subscript.

einsum::einsum('i->', arrA)
## [1] 1.278338
DelayedTensor::einsum('i->', darrA)
## <1> array of class HDF5Array and type "double":
##      [1] 
## 1.278338
einsum::einsum('ij->', arrC)
## [1] 6.726493
DelayedTensor::einsum('ij->', darrC)
## <1> array of class HDF5Array and type "double":
##      [1] 
## 6.726493
einsum::einsum('ijk->', arrE)
## [1] 27.80788
DelayedTensor::einsum('ijk->', darrE)
## <1> array of class HDF5Array and type "double":
##      [1] 
## 27.80788

3.3.1 Row-wise / Column-wise Summation

einsum::einsum('ij->i', arrC)
## [1] 2.135216 1.995060 2.596217
DelayedTensor::einsum('ij->i', darrC)
## <3> array of class HDF5Array and type "double":
##      [1]      [2]      [3] 
## 2.135216 1.995060 2.596217
einsum::einsum('ij->j', arrC)
## [1] 1.200070 2.438479 1.741567 1.346377
DelayedTensor::einsum('ij->j', darrC)
## <4> array of class HDF5Array and type "double":
##      [1]      [2]      [3]      [4] 
## 1.200070 2.438479 1.741567 1.346377

3.3.2 Mode-wise Vectorization

einsum::einsum('ijk->i', arrE)
## [1] 9.724661 9.172078 8.911142
DelayedTensor::einsum('ijk->i', darrE)
## <3> array of class HDF5Array and type "double":
##      [1]      [2]      [3] 
## 9.724661 9.172078 8.911142
einsum::einsum('ijk->j', arrE)
## [1] 7.679695 6.582958 5.985238 7.559990
DelayedTensor::einsum('ijk->j', darrE)
## <4> array of class HDF5Array and type "double":
##      [1]      [2]      [3]      [4] 
## 7.679695 6.582958 5.985238 7.559990
einsum::einsum('ijk->k', arrE)
## [1] 5.185770 6.273117 5.418382 5.348347 5.582265
DelayedTensor::einsum('ijk->k', darrE)
## <5> array of class HDF5Array and type "double":
##      [1]      [2]      [3]      [4]      [5] 
## 5.185770 6.273117 5.418382 5.348347 5.582265

3.3.3 Mode-wise Summation

These are the same results as those returned by the modeSum function; a hedged sketch comparing the two follows the examples below.

einsum::einsum('ijk->ij', arrE)
##          [,1]     [,2]     [,3]     [,4]
## [1,] 2.880433 1.866380 2.063865 2.913983
## [2,] 3.354163 1.889479 1.704971 2.223465
## [3,] 1.445100 2.827098 2.216402 2.422542
DelayedTensor::einsum('ijk->ij', darrE)
## <3 x 4> matrix of class HDF5Matrix and type "double":
##          [,1]     [,2]     [,3]     [,4]
## [1,] 2.880433 1.866380 2.063865 2.913983
## [2,] 3.354163 1.889479 1.704971 2.223465
## [3,] 1.445100 2.827098 2.216402 2.422542
einsum::einsum('ijk->jk', arrE)
##           [,1]     [,2]     [,3]      [,4]      [,5]
## [1,] 2.1294861 1.294786 1.062242 1.5773012 1.6158793
## [2,] 1.5466349 1.374750 1.230719 1.0601422 1.3707118
## [3,] 1.0204384 1.897173 1.774478 0.7199364 0.5732119
## [4,] 0.4892104 1.706407 1.350943 1.9909673 2.0224620
DelayedTensor::einsum('ijk->jk', darrE)
## <4 x 5> matrix of class HDF5Matrix and type "double":
##           [,1]      [,2]      [,3]      [,4]      [,5]
## [1,] 2.1294861 1.2947860 1.0622425 1.5773012 1.6158793
## [2,] 1.5466349 1.3747499 1.2307189 1.0601422 1.3707118
## [3,] 1.0204384 1.8971735 1.7744778 0.7199364 0.5732119
## [4,] 0.4892104 1.7064074 1.3509428 1.9909673 2.0224620
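
The following is a hedged comparison against modeSum; it assumes that DelayedTensor::modeSum follows the rTensor-style signature modeSum(darr, m, drop) and keeps the summed mode as a length-1 dimension by default (output omitted).

# Hedged sketch: summing darrE over mode 3 should agree with 'ijk->ij'
res_einsum  <- DelayedTensor::einsum('ijk->ij', darrE)
res_modeSum <- DelayedTensor::modeSum(darrE, m=3)
all.equal(as.array(res_einsum), drop(as.array(res_modeSum)))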

3.3.4 Trace

If we take the diagonal elements of a matrix and add them together, we get the trace.

einsum::einsum('ii->', arrB)
## [1] 2.050972
DelayedTensor::einsum('ii->', darrB)
## <1> array of class HDF5Array and type "double":
##      [1] 
## 2.050972

3.4 Permutation

By changing the order of the subscripts between the left and right sides of ->, we can obtain a permuted (transposed) array or DelayedArray.

einsum::einsum('ij->ji', arrB)
##           [,1]      [,2]      [,3]
## [1,] 0.3632094 0.9492059 0.5845347
## [2,] 0.3342510 0.9056445 0.3242420
## [3,] 0.9439185 0.8411172 0.7821177
DelayedTensor::einsum('ij->ji', darrB)
## <3 x 3> matrix of class DelayedArray and type "double":
##           [,1]      [,2]      [,3]
## [1,] 0.3632094 0.9492059 0.5845347
## [2,] 0.3342510 0.9056445 0.3242420
## [3,] 0.9439185 0.8411172 0.7821177
einsum::einsum('ijk->jki', arrD)
## , , 1
## 
##           [,1]       [,2]      [,3]
## [1,] 0.3214328 0.66542812 0.9281013
## [2,] 0.2484109 0.47605905 0.7681962
## [3,] 0.4787834 0.03448109 0.9886049
## 
## , , 2
## 
##           [,1]      [,2]      [,3]
## [1,] 0.3937839 0.7152411 0.3888993
## [2,] 0.3401385 0.2450162 0.7065700
## [3,] 0.4349370 0.2367576 0.6612532
## 
## , , 3
## 
##            [,1]      [,2]      [,3]
## [1,] 0.76743375 0.7709629 0.9085275
## [2,] 0.43828353 0.3429240 0.6025874
## [3,] 0.04469795 0.2834940 0.6501136
DelayedTensor::einsum('ijk->jki', darrD)
## <3 x 3 x 3> array of class DelayedArray and type "double":
## ,,1
##            [,1]       [,2]       [,3]
## [1,] 0.32143276 0.66542812 0.92810127
## [2,] 0.24841091 0.47605905 0.76819620
## [3,] 0.47878337 0.03448109 0.98860493
## 
## ,,2
##           [,1]      [,2]      [,3]
## [1,] 0.3937839 0.7152411 0.3888993
## [2,] 0.3401385 0.2450162 0.7065700
## [3,] 0.4349370 0.2367576 0.6612532
## 
## ,,3
##            [,1]       [,2]       [,3]
## [1,] 0.76743375 0.77096289 0.90852748
## [2,] 0.43828353 0.34292405 0.60258744
## [3,] 0.04469795 0.28349395 0.65011365

3.5 Multiplication + Summation

Some examples of combining Multiplication and Summation are shown below.

3.5.1 Inner Product (Squared Frobenius Norm)

The inner product first calculates the Hadamard product and then collapses it into a 0D tensor (a scalar); here this gives the squared Frobenius norm.

einsum::einsum('i,i->', arrA, arrA)
## [1] 0.5460566
DelayedTensor::einsum('i,i->', darrA, darrA)
## <1> array of class HDF5Array and type "double":
##       [1] 
## 0.5460566
einsum::einsum('ij,ij->', arrC, arrC)
## [1] 4.689527
DelayedTensor::einsum('ij,ij->', darrC, darrC)
## <1> array of class HDF5Array and type "double":
##      [1] 
## 4.689527
einsum::einsum('ijk,ijk->', arrE, arrE)
## [1] 18.11399
DelayedTensor::einsum('ijk,ijk->', darrE, darrE)
## <1> array of class HDF5Array and type "double":
##      [1] 
## 18.11399

3.5.2 Contracted Product

The inner product is an operation that eliminates all subscripts, while the outer product is an operation that keeps all subscripts intact. In between the two, the operation that eliminates some subscripts by summing over them while keeping the others is called the contracted product.

einsum::einsum('ijk,ijk->jk', arrE, arrE)
##           [,1]      [,2]      [,3]      [,4]      [,5]
## [1,] 1.5376592 0.7559250 0.7458585 0.9485182 1.1458105
## [2,] 1.1218417 0.7719932 1.0210752 0.6647442 0.9096329
## [3,] 0.6072056 1.5147628 1.1163472 0.1884424 0.1371638
## [4,] 0.1055629 1.1357723 0.7954360 1.4774975 1.4127407
DelayedTensor::einsum('ijk,ijk->jk', darrE, darrE)
## <4 x 5> matrix of class HDF5Matrix and type "double":
##           [,1]      [,2]      [,3]      [,4]      [,5]
## [1,] 1.5376592 0.7559250 0.7458585 0.9485182 1.1458105
## [2,] 1.1218417 0.7719932 1.0210752 0.6647442 0.9096329
## [3,] 0.6072056 1.5147628 1.1163472 0.1884424 0.1371638
## [4,] 0.1055629 1.1357723 0.7954360 1.4774975 1.4127407

3.5.3 Matrix Multiplication

Matrix multiplication can be considered a contracted product.

einsum::einsum('ij,jk->ik', arrC, t(arrC))
##          [,1]     [,2]     [,3]
## [1,] 1.526144 1.189471 1.259448
## [2,] 1.189471 1.216732 1.322383
## [3,] 1.259448 1.322383 1.946651
DelayedTensor::einsum('ij,jk->ik', darrC, t(darrC))
## <3 x 3> matrix of class HDF5Matrix and type "double":
##          [,1]     [,2]     [,3]
## [1,] 1.526144 1.189471 1.259448
## [2,] 1.189471 1.216732 1.322383
## [3,] 1.259448 1.322383 1.946651
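
As a quick sanity check (output omitted), the contracted product above matches R's built-in operator.

# Sanity check: the einsum contracted product equals ordinary %*%
all.equal(einsum::einsum('ij,jk->ik', arrC, t(arrC)), arrC %*% t(arrC))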

3.6 Multiplication + Permutation

Some examples of combining Multiplication and Permutation are shown below.

einsum::einsum('ij,ij->ji', arrC, arrC)
##              [,1]       [,2]       [,3]
## [1,] 9.527117e-05 0.08977840 0.79330919
## [2,] 5.153143e-01 0.77797306 0.70324464
## [3,] 6.463516e-01 0.09696237 0.39215092
## [4,] 3.643830e-01 0.25201790 0.05794673
DelayedTensor::einsum('ij,ij->ji', darrC, darrC)
## <4 x 3> matrix of class HDF5Matrix and type "double":
##              [,1]         [,2]         [,3]
## [1,] 9.527117e-05 8.977840e-02 7.933092e-01
## [2,] 5.153143e-01 7.779731e-01 7.032446e-01
## [3,] 6.463516e-01 9.696237e-02 3.921509e-01
## [4,] 3.643830e-01 2.520179e-01 5.794673e-02
einsum::einsum('ijk,ijk->jki', arrE, arrE)
## , , 1
## 
##              [,1]       [,2]        [,3]       [,4]      [,5]
## [1,] 0.4890148864 0.43997262 0.002112074 0.54795321 0.5352921
## [2,] 0.1314618072 0.06426262 0.028281176 0.05975626 0.7017112
## [3,] 0.0005490505 0.57908122 0.390895868 0.11241257 0.1017376
## [4,] 0.0037964608 0.78210884 0.632979979 0.12460620 0.6714189
## 
## , , 2
## 
##            [,1]       [,2]        [,3]        [,4]      [,5]
## [1,] 0.36149896 0.31042889 0.714505036 0.332935222 0.5982465
## [2,] 0.94558591 0.13083715 0.004694572 0.001509516 0.2006887
## [3,] 0.53677000 0.03503462 0.155458867 0.050328446 0.0277311
## [4,] 0.08163995 0.10379046 0.045553164 0.800198818 0.2576608
## 
## , , 3
## 
##            [,1]        [,2]      [,3]       [,4]        [,5]
## [1,] 0.68714532 0.005523494 0.0292414 0.06762975 0.012271942
## [2,] 0.04479398 0.576893402 0.9880994 0.60347846 0.007232960
## [3,] 0.06988659 0.900646991 0.5699924 0.02570139 0.007695155
## [4,] 0.02012652 0.249873004 0.1169028 0.55269250 0.483661104
DelayedTensor::einsum('ijk,ijk->jki', darrE, darrE)
## <4 x 5 x 3> array of class HDF5Array and type "double":
## ,,1
##              [,1]         [,2]         [,3]         [,4]         [,5]
## [1,] 0.4890148864 0.4399726235 0.0021120744 0.5479532097 0.5352920914
## [2,] 0.1314618072 0.0642626218 0.0282811756 0.0597562583 0.7017111544
## [3,] 0.0005490505 0.5790812196 0.3908958677 0.1124125690 0.1017375639
## [4,] 0.0037964608 0.7821088397 0.6329799788 0.1246061978 0.6714188564
## 
## ,,2
##             [,1]        [,2]        [,3]        [,4]        [,5]
## [1,] 0.361498960 0.310428890 0.714505036 0.332935222 0.598246479
## [2,] 0.945585906 0.130837145 0.004694572 0.001509516 0.200688746
## [3,] 0.536770002 0.035034623 0.155458867 0.050328446 0.027731101
## [4,] 0.081639950 0.103790458 0.045553164 0.800198818 0.257660777
## 
## ,,3
##             [,1]        [,2]        [,3]        [,4]        [,5]
## [1,] 0.687145318 0.005523494 0.029241397 0.067629754 0.012271942
## [2,] 0.044793981 0.576893402 0.988099432 0.603478459 0.007232960
## [3,] 0.069886595 0.900646991 0.569992433 0.025701389 0.007695155
## [4,] 0.020126520 0.249873004 0.116902819 0.552692502 0.483661104

3.7 Summation + Permutation

Some examples of combining Summation and Permutation are shown below.

einsum::einsum('ijk->ki', arrE)
##          [,1]     [,2]      [,3]
## [1,] 1.146920 2.592033 1.4468169
## [2,] 2.562148 1.428216 2.2827524
## [3,] 1.634944 1.521516 2.2619220
## [4,] 1.672965 1.734736 1.9406454
## [5,] 2.707683 1.895576 0.9790051
DelayedTensor::einsum('ijk->ki', darrE)
## <5 x 3> matrix of class HDF5Matrix and type "double":
##           [,1]      [,2]      [,3]
## [1,] 1.1469199 2.5920332 1.4468169
## [2,] 2.5621481 1.4282163 2.2827524
## [3,] 1.6349444 1.5215155 2.2619220
## [4,] 1.6729654 1.7347362 1.9406454
## [5,] 2.7076834 1.8955765 0.9790051

3.8 Multiplication + Summation + Permutation

Finally, we will show a more complex example, combining Multiplication, Summation, and Permutation.

einsum::einsum('i,ij,ijk,ijk,ji->jki',
    arrA, arrC, arrE, arrE, t(arrC))
## , , 1
## 
##              [,1]         [,2]         [,3]         [,4]         [,5]
## [1,] 1.848106e-05 1.662763e-05 7.982040e-08 2.070848e-05 2.022998e-05
## [2,] 2.687293e-02 1.313632e-02 5.781131e-03 1.221515e-02 1.434411e-01
## [3,] 1.407746e-04 1.484744e-01 1.002243e-01 2.882218e-02 2.608515e-02
## [4,] 5.487572e-04 1.130495e-01 9.149373e-02 1.801113e-02 9.704985e-02
## 
## , , 2
## 
##            [,1]       [,2]        [,3]         [,4]        [,5]
## [1,] 0.01445506 0.01241295 0.028570520 0.0133128978 0.023921753
## [2,] 0.32764728 0.04533531 0.001626678 0.0005230502 0.069539024
## [3,] 0.02318102 0.00151301 0.006713666 0.0021734906 0.001197599
## [4,] 0.00916379 0.01165010 0.005113179 0.0898194337 0.028921494
## 
## , , 3
## 
##              [,1]        [,2]        [,3]        [,4]        [,5]
## [1,] 0.2378159674 0.001911641 0.010120233 0.023406163 0.004247229
## [2,] 0.0137428254 0.176991308 0.303149611 0.185147622 0.002219077
## [3,] 0.0119563068 0.154084080 0.097515187 0.004397033 0.001316499
## [4,] 0.0005088005 0.006316816 0.002955316 0.013972125 0.012227004
DelayedTensor::einsum('i,ij,ijk,ijk,ji->jki',
    darrA, darrC, darrE, darrE, t(darrC))
## <4 x 5 x 3> array of class HDF5Array and type "double":
## ,,1
##              [,1]         [,2]         [,3]         [,4]         [,5]
## [1,] 1.848106e-05 1.662763e-05 7.982040e-08 2.070848e-05 2.022998e-05
## [2,] 2.687293e-02 1.313632e-02 5.781131e-03 1.221515e-02 1.434411e-01
## [3,] 1.407746e-04 1.484744e-01 1.002243e-01 2.882218e-02 2.608515e-02
## [4,] 5.487572e-04 1.130495e-01 9.149373e-02 1.801113e-02 9.704985e-02
## 
## ,,2
##              [,1]         [,2]         [,3]         [,4]         [,5]
## [1,] 0.0144550603 0.0124129495 0.0285705203 0.0133128978 0.0239217533
## [2,] 0.3276472751 0.0453353142 0.0016266778 0.0005230502 0.0695390238
## [3,] 0.0231810166 0.0015130096 0.0067136661 0.0021734906 0.0011975988
## [4,] 0.0091637902 0.0116501049 0.0051131785 0.0898194337 0.0289214937
## 
## ,,3
##              [,1]         [,2]         [,3]         [,4]         [,5]
## [1,] 0.2378159674 0.0019116408 0.0101202334 0.0234061630 0.0042472294
## [2,] 0.0137428254 0.1769913078 0.3031496114 0.1851476222 0.0022190774
## [3,] 0.0119563068 0.1540840804 0.0975151871 0.0043970334 0.0013164990
## [4,] 0.0005088005 0.0063168159 0.0029553156 0.0139721247 0.0122270037

4 Create your own functions with einsum

By using einsum and other DelayedTensor functions, it is possible to implement your own tensor calculation functions. These are intended to be applied to DelayedArray objects and can scale to large data, since the calculation is internally performed by block processing.

For example, kronecker can be easily implemented with einsum and other DelayedTensor functions (https://stackoverflow.com/questions/56067643/speeding-up-kronecker-products-numpy), although the kronecker function inside DelayedTensor has a more efficient implementation.

darr1 <- DelayedArray(array(1:6, dim=c(2,3)))
darr2 <- DelayedArray(array(20:1, dim=c(4,5)))

mykronecker <- function(darr1, darr2){
    stopifnot((length(dim(darr1)) == 2) && (length(dim(darr2)) == 2))
    # Outer Product
    tmpdarr <- DelayedTensor::einsum('ij,kl->ikjl', darr1, darr2)
    # Reshape
    DelayedTensor::unfold(tmpdarr, row_idx=c(2,1), col_idx=c(4,3))
}

identical(as.array(DelayedTensor::kronecker(darr1, darr2)),
    as.array(mykronecker(darr1, darr2)))
## [1] TRUE
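
As an additional hedged check (output omitted), the result can also be compared with base R's kronecker; this assumes DelayedTensor::kronecker uses the same element layout as base::kronecker, which the identical() check above already suggests.

# Hedged check against base R's kronecker(), assuming the same layout
all.equal(as.array(mykronecker(darr1, darr2)),
    kronecker(as.array(darr1), as.array(darr2)))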

Session information

## R version 4.2.0 RC (2022-04-21 r82226)
## Platform: x86_64-pc-linux-gnu (64-bit)
## Running under: Ubuntu 20.04.4 LTS
## 
## Matrix products: default
## BLAS:   /home/biocbuild/bbs-3.16-bioc/R/lib/libRblas.so
## LAPACK: /home/biocbuild/bbs-3.16-bioc/R/lib/libRlapack.so
## 
## locale:
##  [1] LC_CTYPE=en_US.UTF-8       LC_NUMERIC=C              
##  [3] LC_TIME=en_GB              LC_COLLATE=C              
##  [5] LC_MONETARY=en_US.UTF-8    LC_MESSAGES=en_US.UTF-8   
##  [7] LC_PAPER=en_US.UTF-8       LC_NAME=C                 
##  [9] LC_ADDRESS=C               LC_TELEPHONE=C            
## [11] LC_MEASUREMENT=en_US.UTF-8 LC_IDENTIFICATION=C       
## 
## attached base packages:
## [1] stats4    stats     graphics  grDevices utils     datasets  methods  
## [8] base     
## 
## other attached packages:
##  [1] einsum_0.1.0             DelayedRandomArray_1.5.0 HDF5Array_1.25.0        
##  [4] rhdf5_2.41.0             DelayedArray_0.23.0      IRanges_2.31.0          
##  [7] S4Vectors_0.35.0         MatrixGenerics_1.9.0     matrixStats_0.62.0      
## [10] BiocGenerics_0.43.0      Matrix_1.4-1             DelayedTensor_1.3.0     
## [13] BiocStyle_2.25.0        
## 
## loaded via a namespace (and not attached):
##  [1] Rcpp_1.0.8.3        rTensor_1.4.8       bslib_0.3.1        
##  [4] compiler_4.2.0      BiocManager_1.30.17 jquerylib_0.1.4    
##  [7] rhdf5filters_1.9.0  tools_4.2.0         digest_0.6.29      
## [10] jsonlite_1.8.0      evaluate_0.15       lattice_0.20-45    
## [13] rlang_1.0.2         cli_3.3.0           parallel_4.2.0     
## [16] yaml_2.3.5          xfun_0.30           fastmap_1.1.0      
## [19] stringr_1.4.0       knitr_1.38          sass_0.4.1         
## [22] grid_4.2.0          R6_2.5.1            BiocParallel_1.31.0
## [25] rmarkdown_2.14      bookdown_0.26       irlba_2.3.5        
## [28] Rhdf5lib_1.19.0     magrittr_2.0.3      BiocSingular_1.13.0
## [31] htmltools_0.5.2     rsvd_1.0.5          beachmat_2.13.0    
## [34] dqrng_0.3.0         ScaledMatrix_1.5.0  stringi_1.7.6