@@ -392,23 +392,15 @@ is determined by the dtypes of the inputs. This is different
 from NumPy's rule on type promotion, when operands contain
 zero-dimensional arrays. Zero-dimensional numpy.ndarray
 are treated as if they were scalar values if they appear
-in operands of NumPy's function, This may affect the dtype
+in operands of a NumPy function. This may affect the dtype
 of its output, depending on the values of the "scalar" inputs.
 
 ```
 >>> (np.array(3, dtype=np.int32) * np.array([1., 2.], dtype=np.float32)).dtype
 dtype('float32')
 
->>> (np.array(300000, dtype=np.int32) * np.array([1., 2.], dtype=np.float32)).dtype
-dtype('float64')
-
 >>> (cp.array(3, dtype=np.int32) * cp.array([1., 2.], dtype=np.float32)).dtype
 dtype('float64')
-
-################## FIXME: example not working
->>> (np.array(3, dtype=np.int32) * np.array([1., 2.], dtype=np.float32)).dtype
-dtype('float64')
-
 ```
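The zero-dimensional-array rule can be checked with NumPy alone. Note that the exact promotion result is version-dependent: NEP 50, adopted in NumPy 2.0, removed value-based casting, so the sketch below only asserts the behavior that is stable across versions and merely prints the version-dependent case.

```python
import numpy as np

arr = np.array([1., 2.], dtype=np.float32)

# A plain Python scalar never widens a float32 array, in both
# NumPy 1.x (value-based casting) and 2.x (NEP 50 "weak" scalars).
print((3 * arr).dtype)  # float32

# A zero-dimensional int32 array: NumPy 1.x treats it like a scalar
# (result stays float32), while NumPy 2.x promotes int32 * float32
# to float64. Printed rather than asserted, since it varies by version.
print((np.array(3, dtype=np.int32) * arr).dtype)
```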
 
 ### Matrix type (numpy.matrix)
@@ -447,44 +439,47 @@ TypeError: Unsupported type <class 'list'>
 
 ### Random seed arrays are hashed to scalars
 
-Like Numpy , CuPy's RandomState objects accept seeds
-either as numbers or as full numpy arrays.
+Like NumPy, CuPy's RandomState objects accept seeds
+either as numbers or as full NumPy arrays.
 
-However, unlike Numpy, array seeds will be hashed down
-to a single number and so may not communicate as much entropy
-to the underlying random number generator.
+However, unlike NumPy, array seeds are hashed down to a
+single 64-bit number, whereas NumPy typically expands the
+seed into a larger, 128-bit state space. CuPy's
+implementation may therefore not communicate as much
+entropy to the underlying random number generator.
 
+<!--
 ```
 >>> seed = np.array([1, 2, 3, 4, 5])
 >>> rs = cp.random.RandomState(seed=seed)
 ```
+-->
 
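The collapse of an array seed into one scalar can be illustrated with a standard-library sketch. This is not CuPy's actual hashing algorithm, and the function name `hash_seed_array` is invented here; the point is only that an array's worth of entropy is reduced to a single 64-bit value.

```python
import hashlib

def hash_seed_array(seed_values):
    """Collapse a sequence of seed integers into one 64-bit integer.

    Hypothetical sketch: CuPy's real hashing may differ. Whatever the
    hash, N integers of entropy collapse to at most 64 bits.
    """
    digest = hashlib.sha256(
        b",".join(str(int(v)).encode() for v in seed_values)
    ).digest()
    # Keep the low 64 bits, i.e. one generator state word.
    return int.from_bytes(digest[:8], "little")

scalar_seed = hash_seed_array([1, 2, 3, 4, 5])
print(scalar_seed)            # a single scalar in [0, 2**64)
```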
 ### NaN (not-a-number) handling
 
-By default CuPy's reduction functions (e.g., cupy.sum())
+Prior to CuPy v11, CuPy's reduction functions (e.g., cupy.sum())
 handle NaNs in complex numbers differently from NumPy's counterparts:
 
 ```
-################## FIXME: example not working
 >>> a = [0.5 + 3.7j, complex(0.7, np.nan), complex(np.nan, -3.9), complex(np.nan, np.nan)]
 >>> a
 [(0.5+3.7j), (0.7+nanj), (nan-3.9j), (nan+nanj)]
-
+>>>
 >>> a_np = np.asarray(a)
 >>> print(a_np.max(), a_np.min())
 (0.7+nanj) (0.7+nanj)
-
+>>>
 >>> a_cp = cp.asarray(a_np)
 >>> print(a_cp.max(), a_cp.min())
-(0.7+nanj ) (0.7+nanj )
+(nan-3.9j) (nan-3.9j)
 ```
 
 The reason is that internally the reduction is performed
 in a strided fashion, thus it does not ensure a
 proper comparison order and cannot follow NumPy's rule
 to always propagate the first-encountered NaN.
-Note that this difference does not apply when CUB
-is enabled ( which is the default for CuPy v11 or later.)
+This difference does not apply when the CUB library is
+enabled, which is the default for CuPy v11 and later.
 
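NumPy's side of this comparison runs without a GPU: because the reduction scans elements in order, the first NaN element encountered is the one propagated.

```python
import numpy as np

a = [0.5 + 3.7j, complex(0.7, np.nan), complex(np.nan, -3.9),
     complex(np.nan, np.nan)]
a_np = np.asarray(a)

# NumPy compares sequentially, so the first NaN element (0.7+nanj)
# is propagated by both reductions.
print(a_np.max(), a_np.min())  # (0.7+nanj) (0.7+nanj)
```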
 ### Contiguity / Strides
 