tilelang.carver.arch.cuda¶
Classes¶
- CUDA

Functions¶
- is_cuda_arch
- is_volta_arch
- is_ampere_arch
- is_ada_arch
- is_hopper_arch
- has_mma_support
- is_tensorcore_supported_precision
Module Contents¶
- tilelang.carver.arch.cuda.is_cuda_arch(arch)¶
- Parameters:
arch
- Return type:
bool
- tilelang.carver.arch.cuda.is_volta_arch(arch)¶
- Parameters:
arch
- Return type:
bool
- tilelang.carver.arch.cuda.is_ampere_arch(arch)¶
- Parameters:
arch
- Return type:
bool
- tilelang.carver.arch.cuda.is_ada_arch(arch)¶
- Parameters:
arch
- Return type:
bool
- tilelang.carver.arch.cuda.is_hopper_arch(arch)¶
- Parameters:
arch
- Return type:
bool
- tilelang.carver.arch.cuda.has_mma_support(arch)¶
- Parameters:
arch
- Return type:
bool
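The architecture predicates above take a device descriptor, but their dispatch reduces to comparing the target's compute capability against well-known CUDA "sm" versions (Volta = sm_70/72, Ampere = sm_80/86/87, Ada = sm_89, Hopper = sm_90). A standalone sketch of that logic, using a bare version number instead of a `TileDevice` (the helper names here are illustrative, not part of the library):

```python
def classify_sm(sm_version: int) -> str:
    """Map a CUDA compute capability (e.g. 80 for sm_80) to a generation name.

    Illustrative stand-in for the is_*_arch predicates, which take a device
    descriptor rather than a raw version number.
    """
    if sm_version in (70, 72):          # Volta (V100, Xavier)
        return "volta"
    if sm_version in (80, 86, 87):      # Ampere (A100, RTX 30xx, Orin)
        return "ampere"
    if sm_version == 89:                # Ada Lovelace (RTX 40xx, L40)
        return "ada"
    if sm_version == 90:                # Hopper (H100)
        return "hopper"
    return "other"


def sm_has_mma(sm_version: int) -> bool:
    # Assumption: tensor-core MMA instructions exist from Volta (sm_70) on,
    # which is the kind of check has_mma_support performs.
    return sm_version >= 70
```

For example, `classify_sm(86)` reports an Ampere part, and `sm_has_mma(61)` is false because Pascal predates tensor cores.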
- tilelang.carver.arch.cuda.is_tensorcore_supported_precision(in_dtype, accum_dtype, arch)¶
- Parameters:
in_dtype (str)
accum_dtype (str)
arch
- Return type:
bool
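A precision check like `is_tensorcore_supported_precision` amounts to a membership test over (input dtype, accumulator dtype) pairs the hardware's MMA units accept. A minimal sketch, assuming a small illustrative table; the real supported set is architecture-dependent and may differ:

```python
# Illustrative (in_dtype, accum_dtype) pairs commonly handled by tensor
# cores; this table is an assumption for the sketch, not the library's.
_TENSORCORE_PAIRS = {
    ("float16", "float16"),
    ("float16", "float32"),
    ("int8", "int32"),
}


def tensorcore_pair_ok(in_dtype: str, accum_dtype: str) -> bool:
    """Return True if the dtype pair is in the illustrative MMA table."""
    return (in_dtype, accum_dtype) in _TENSORCORE_PAIRS
```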
- class tilelang.carver.arch.cuda.CUDA(target)¶
- Bases:
tilelang.carver.arch.arch_base.TileDevice
- Parameters:
target (tvm.target.Target | str)
- target¶
- sm_version¶
- name¶
- device: tvm.runtime.Device¶
- platform: str = 'CUDA'¶
- smem_cap¶
- compute_max_core¶
- warp_size¶
- compute_capability¶
- reg_cap: int = 65536¶
- max_smem_usage: int¶
- sm_partition: int = 4¶
- l2_cache_size_bytes: int¶
- transaction_size: list[int] = [32, 128]¶
- bandwidth: list[int] = [750, 12080]¶
- available_tensor_instructions: list[TensorInstruction] = None¶
- get_avaliable_tensorintrin_shapes()¶
- __repr__()¶
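The attributes above describe the resource limits a tuner reads when sizing tiles. Since constructing the real class requires tilelang and a TVM target, here is a self-contained stand-in that mirrors the documented fields and shows one derived quantity a tuner might compute (the per-device values marked as assumptions are not queried from hardware):

```python
from dataclasses import dataclass


@dataclass
class CudaDeviceInfo:
    """Minimal mirror of the CUDA descriptor's documented attributes."""
    platform: str = "CUDA"     # class default, as documented above
    reg_cap: int = 65536       # class default: 32-bit registers per SM
    sm_partition: int = 4      # class default: scheduler partitions per SM
    warp_size: int = 32        # assumption: real class queries the device
    smem_cap: int = 49152      # assumption: shared-memory cap in bytes


def max_regs_per_thread(info: CudaDeviceInfo, threads_per_block: int) -> int:
    # Registers available per thread if one block occupies a full SM;
    # a hypothetical helper, not part of the library's API.
    return info.reg_cap // threads_per_block
```

With the 65536-register default, a 256-thread block leaves at most 256 registers per thread, which is the kind of bound a schedule generator uses to reject over-large tiles.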