Module core::arch::arm

🔬 This is a nightly-only experimental API. (stdsimd #27731)
This is supported on ARM only.

Platform-specific intrinsics for the ARM platform.

See the module documentation for more details.

Structs

APSRExperimentalARM

Application Program Status Register

SYExperimentalARM

Full system is the required shareability domain, reads and writes are the required access types

float32x2_tExperimentalARM

ARM-specific 64-bit wide vector of two packed f32.

float32x4_tExperimentalARM

ARM-specific 128-bit wide vector of four packed f32.

int16x2_tExperimentalARM

ARM-specific 32-bit wide vector of two packed i16.

int16x4_tExperimentalARM

ARM-specific 64-bit wide vector of four packed i16.

int16x8_tExperimentalARM

ARM-specific 128-bit wide vector of eight packed i16.

int32x2_tExperimentalARM

ARM-specific 64-bit wide vector of two packed i32.

int32x4_tExperimentalARM

ARM-specific 128-bit wide vector of four packed i32.

int64x1_tExperimentalARM

ARM-specific 64-bit wide vector of one packed i64.

int64x2_tExperimentalARM

ARM-specific 128-bit wide vector of two packed i64.

int8x4_tExperimentalARM

ARM-specific 32-bit wide vector of four packed i8.

int8x8_tExperimentalARM

ARM-specific 64-bit wide vector of eight packed i8.

int8x16_tExperimentalARM

ARM-specific 128-bit wide vector of sixteen packed i8.

int8x8x2_tExperimentalARM

ARM-specific type containing two int8x8_t vectors.

int8x8x3_tExperimentalARM

ARM-specific type containing three int8x8_t vectors.

int8x8x4_tExperimentalARM

ARM-specific type containing four int8x8_t vectors.

poly16x4_tExperimentalARM

ARM-specific 64-bit wide vector of four packed u16.

poly16x8_tExperimentalARM

ARM-specific 128-bit wide vector of eight packed u16.

poly8x8_tExperimentalARM

ARM-specific 64-bit wide polynomial vector of eight packed u8.

poly8x16_tExperimentalARM

ARM-specific 128-bit wide polynomial vector of sixteen packed u8.

poly8x8x2_tExperimentalARM

ARM-specific type containing two poly8x8_t vectors.

poly8x8x3_tExperimentalARM

ARM-specific type containing three poly8x8_t vectors.

poly8x8x4_tExperimentalARM

ARM-specific type containing four poly8x8_t vectors.

uint16x2_tExperimentalARM

ARM-specific 32-bit wide vector of two packed u16.

uint16x4_tExperimentalARM

ARM-specific 64-bit wide vector of four packed u16.

uint16x8_tExperimentalARM

ARM-specific 128-bit wide vector of eight packed u16.

uint32x2_tExperimentalARM

ARM-specific 64-bit wide vector of two packed u32.

uint32x4_tExperimentalARM

ARM-specific 128-bit wide vector of four packed u32.

uint64x1_tExperimentalARM

ARM-specific 64-bit wide vector of one packed u64.

uint64x2_tExperimentalARM

ARM-specific 128-bit wide vector of two packed u64.

uint8x4_tExperimentalARM

ARM-specific 32-bit wide vector of four packed u8.

uint8x8_tExperimentalARM

ARM-specific 64-bit wide vector of eight packed u8.

uint8x16_tExperimentalARM

ARM-specific 128-bit wide vector of sixteen packed u8.

uint8x8x2_tExperimentalARM

ARM-specific type containing two uint8x8_t vectors.

uint8x8x3_tExperimentalARM

ARM-specific type containing three uint8x8_t vectors.

uint8x8x4_tExperimentalARM

ARM-specific type containing four uint8x8_t vectors.

Functions

__breakpointExperimentalARM

Inserts a breakpoint instruction.

__dmbExperimentalARM

Generates a DMB (data memory barrier) instruction or equivalent CP15 instruction.

__dsbExperimentalARM

Generates a DSB (data synchronization barrier) instruction or equivalent CP15 instruction.

__isbExperimentalARM

Generates an ISB (instruction synchronization barrier) instruction or equivalent CP15 instruction.

__ldrexExperimentalARM

Executes an exclusive LDR instruction for a 32-bit value.

__nopExperimentalARM

Generates an unspecified no-op instruction.

__qaddExperimentalARM

Signed saturating addition
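Saturating arithmetic clamps results at the type bounds instead of wrapping. A minimal portable sketch of __qadd's semantics, using Rust's scalar saturating_add as a stand-in (qadd_model is a hypothetical helper name, not the intrinsic, which compiles to a single QADD instruction on ARM):

```rust
// Portable model of __qadd's semantics: 32-bit signed saturating addition.
fn qadd_model(a: i32, b: i32) -> i32 {
    a.saturating_add(b)
}

fn main() {
    // Clamps at the bounds of i32 instead of wrapping.
    assert_eq!(qadd_model(i32::MAX, 1), i32::MAX);
    assert_eq!(qadd_model(i32::MIN, -1), i32::MIN);
    // In-range additions behave normally.
    assert_eq!(qadd_model(-3, 5), 2);
}
```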

__qadd8ExperimentalARM

Saturating four 8-bit integer additions

__qadd16ExperimentalARM

Saturating two 16-bit integer additions

__qasxExperimentalARM

Saturating 16-bit addition and subtraction with exchange (QASX)

__qdblExperimentalARM

Inserts a QADD instruction (saturating doubling)

__qsaxExperimentalARM

Saturating 16-bit subtraction and addition with exchange (QSAX)

__qsubExperimentalARM

Signed saturating subtraction

__qsub8ExperimentalARM

Saturating four 8-bit integer subtractions

__qsub16ExperimentalARM

Saturating two 16-bit integer subtractions

__rsrExperimentalARM

Reads a 32-bit system register

__rsrpExperimentalARM

Reads a system register containing an address

__sadd8ExperimentalARM

Parallel addition of four 8-bit signed values (SADD8)

__sadd16ExperimentalARM

Parallel addition of two 16-bit signed values (SADD16)

__sasxExperimentalARM

Parallel 16-bit signed addition and subtraction with exchange (SASX)

__selExperimentalARM

Select bytes from each operand according to APSR GE flags

__sevExperimentalARM

Generates a SEV (send a global event) hint instruction.

__shadd8ExperimentalARM

Signed halving parallel byte-wise addition.

__shadd16ExperimentalARM

Signed halving parallel halfword-wise addition.

__shsub8ExperimentalARM

Signed halving parallel byte-wise subtraction.

__shsub16ExperimentalARM

Signed halving parallel halfword-wise subtraction.

__smlabbExperimentalARM

Inserts a SMLABB instruction

__smlabtExperimentalARM

Inserts a SMLABT instruction

__smladExperimentalARM

Dual 16-bit Signed Multiply with Addition of products and 32-bit accumulation.

__smlatbExperimentalARM

Inserts a SMLATB instruction

__smlattExperimentalARM

Inserts a SMLATT instruction

__smlawbExperimentalARM

Inserts a SMLAWB instruction

__smlawtExperimentalARM

Inserts a SMLAWT instruction

__smlsdExperimentalARM

Dual 16-bit Signed Multiply with Subtraction of products and 32-bit accumulation and overflow detection.

__smuadExperimentalARM

Signed Dual Multiply Add.

__smuadxExperimentalARM

Signed Dual Multiply Add Reversed.

__smulbbExperimentalARM

Inserts a SMULBB instruction

__smulbtExperimentalARM

Inserts a SMULBT instruction

__smultbExperimentalARM

Inserts a SMULTB instruction

__smulttExperimentalARM

Inserts a SMULTT instruction

__smulwbExperimentalARM

Inserts a SMULWB instruction

__smulwtExperimentalARM

Inserts a SMULWT instruction

__smusdExperimentalARM

Signed Dual Multiply Subtract.

__smusdxExperimentalARM

Signed Dual Multiply Subtract Reversed.

__ssub8ExperimentalARM

Inserts a SSUB8 instruction.

__strexExperimentalARM

Executes an exclusive STR instruction for a 32-bit value

__usad8ExperimentalARM

Sum of 8-bit absolute differences.

__usada8ExperimentalARM

Sum of 8-bit absolute differences and constant.
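__usad8 treats each 32-bit argument as four bytes and sums the absolute differences of corresponding bytes (__usada8 additionally adds an accumulator). A portable sketch of that semantics (usad8_model is an illustrative helper name, not the intrinsic):

```rust
// Portable model of __usad8: sum of byte-wise absolute differences
// of the four corresponding bytes of two 32-bit values.
fn usad8_model(a: u32, b: u32) -> u32 {
    a.to_le_bytes()
        .iter()
        .zip(b.to_le_bytes().iter())
        .map(|(&x, &y)| (i32::from(x) - i32::from(y)).unsigned_abs())
        .sum()
}

fn main() {
    // Bytes [1,2,3,4] vs [1,1,1,1]: |1-1| + |2-1| + |3-1| + |4-1| = 6
    assert_eq!(usad8_model(0x0403_0201, 0x0101_0101), 6);
    assert_eq!(usad8_model(0xFF, 0x00), 255);
}
```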

__usub8ExperimentalARM

Inserts a USUB8 instruction.

__wfeExperimentalARM

Generates a WFE (wait for event) hint instruction, or nothing.

__wfiExperimentalARM

Generates a WFI (wait for interrupt) hint instruction, or nothing.

__wsrExperimentalARM

Writes a 32-bit system register

__wsrpExperimentalARM

Writes a system register containing an address

__yieldExperimentalARM

Generates a YIELD hint instruction.

_rev_u16ExperimentalARM

Reverse the order of the bytes.

_rev_u32ExperimentalARM

Reverse the order of the bytes.
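_rev_u16 and _rev_u32 reverse the byte order of their argument; on ARM they lower to the REV instructions. Rust's portable swap_bytes expresses the same operation:

```rust
fn main() {
    // Byte reversal, the operation _rev_u32 / _rev_u16 perform.
    assert_eq!(0x1122_3344u32.swap_bytes(), 0x4433_2211);
    assert_eq!(0x1234u16.swap_bytes(), 0x3412);
}
```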

udfExperimentalARM

Generates the trap instruction UDF

vadd_f32Experimentalneon and v7 and ARM

Vector add.

vadd_s8Experimentalneon and v7 and ARM

Vector add.

vadd_s16Experimentalneon and v7 and ARM

Vector add.

vadd_s32Experimentalneon and v7 and ARM

Vector add.

vadd_u8Experimentalneon and v7 and ARM

Vector add.

vadd_u16Experimentalneon and v7 and ARM

Vector add.

vadd_u32Experimentalneon and v7 and ARM

Vector add.

vaddl_s8Experimentalneon and v7 and ARM

Vector long add.

vaddl_s16Experimentalneon and v7 and ARM

Vector long add.

vaddl_s32Experimentalneon and v7 and ARM

Vector long add.

vaddl_u8Experimentalneon and v7 and ARM

Vector long add.

vaddl_u16Experimentalneon and v7 and ARM

Vector long add.

vaddl_u32Experimentalneon and v7 and ARM

Vector long add.

vaddq_f32Experimentalneon and v7 and ARM

Vector add.

vaddq_s8Experimentalneon and v7 and ARM

Vector add.

vaddq_s16Experimentalneon and v7 and ARM

Vector add.

vaddq_s32Experimentalneon and v7 and ARM

Vector add.

vaddq_s64Experimentalneon and v7 and ARM

Vector add.

vaddq_u8Experimentalneon and v7 and ARM

Vector add.

vaddq_u16Experimentalneon and v7 and ARM

Vector add.

vaddq_u32Experimentalneon and v7 and ARM

Vector add.

vaddq_u64Experimentalneon and v7 and ARM

Vector add.
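The vadd family adds corresponding lanes with wrapping on overflow, while vaddl widens each lane first so no bits are lost. A portable lane-wise model of vadd_u8's semantics (the helper name is illustrative, not the intrinsic):

```rust
// Portable model of vadd_u8: element-wise wrapping addition
// of two 8-lane vectors of u8.
fn vadd_u8_model(a: [u8; 8], b: [u8; 8]) -> [u8; 8] {
    let mut r = [0u8; 8];
    for i in 0..8 {
        r[i] = a[i].wrapping_add(b[i]); // wraps modulo 256, like NEON VADD
    }
    r
}

fn main() {
    assert_eq!(
        vadd_u8_model([1, 2, 3, 4, 5, 6, 7, 8], [10; 8]),
        [11, 12, 13, 14, 15, 16, 17, 18]
    );
    // Overflow wraps rather than saturating (contrast with vqadd_u8).
    assert_eq!(vadd_u8_model([255; 8], [1; 8]), [0; 8]);
}
```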

vand_s8Experimentalneon and v7 and ARM

Vector bitwise and

vand_s16Experimentalneon and v7 and ARM

Vector bitwise and

vand_s32Experimentalneon and v7 and ARM

Vector bitwise and

vand_s64Experimentalneon and v7 and ARM

Vector bitwise and

vand_u8Experimentalneon and v7 and ARM

Vector bitwise and

vand_u16Experimentalneon and v7 and ARM

Vector bitwise and

vand_u32Experimentalneon and v7 and ARM

Vector bitwise and

vand_u64Experimentalneon and v7 and ARM

Vector bitwise and

vandq_s8Experimentalneon and v7 and ARM

Vector bitwise and

vandq_s16Experimentalneon and v7 and ARM

Vector bitwise and

vandq_s32Experimentalneon and v7 and ARM

Vector bitwise and

vandq_s64Experimentalneon and v7 and ARM

Vector bitwise and

vandq_u8Experimentalneon and v7 and ARM

Vector bitwise and

vandq_u16Experimentalneon and v7 and ARM

Vector bitwise and

vandq_u32Experimentalneon and v7 and ARM

Vector bitwise and

vandq_u64Experimentalneon and v7 and ARM

Vector bitwise and

vceq_f32Experimentalneon and v7 and ARM

Floating-point compare equal

vceq_s8Experimentalneon and v7 and ARM

Compare bitwise Equal (vector)

vceq_s16Experimentalneon and v7 and ARM

Compare bitwise Equal (vector)

vceq_s32Experimentalneon and v7 and ARM

Compare bitwise Equal (vector)

vceq_u8Experimentalneon and v7 and ARM

Compare bitwise Equal (vector)

vceq_u16Experimentalneon and v7 and ARM

Compare bitwise Equal (vector)

vceq_u32Experimentalneon and v7 and ARM

Compare bitwise Equal (vector)

vceqq_f32Experimentalneon and v7 and ARM

Floating-point compare equal

vceqq_s8Experimentalneon and v7 and ARM

Compare bitwise Equal (vector)

vceqq_s16Experimentalneon and v7 and ARM

Compare bitwise Equal (vector)

vceqq_s32Experimentalneon and v7 and ARM

Compare bitwise Equal (vector)

vceqq_u8Experimentalneon and v7 and ARM

Compare bitwise Equal (vector)

vceqq_u16Experimentalneon and v7 and ARM

Compare bitwise Equal (vector)

vceqq_u32Experimentalneon and v7 and ARM

Compare bitwise Equal (vector)
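NEON comparisons produce a mask, not a boolean: each output lane is all ones when the comparison holds for that lane and all zeros otherwise, so the result can feed directly into bitwise selects. A portable model of vceq_u8's semantics (helper name illustrative):

```rust
// Portable model of vceq_u8: lane-wise equality producing a
// 0xFF / 0x00 mask per lane, as the NEON compare instructions do.
fn vceq_u8_model(a: [u8; 8], b: [u8; 8]) -> [u8; 8] {
    let mut r = [0u8; 8];
    for i in 0..8 {
        r[i] = if a[i] == b[i] { 0xFF } else { 0x00 };
    }
    r
}

fn main() {
    let a = [1, 2, 3, 4, 5, 6, 7, 8];
    let b = [1, 0, 3, 0, 5, 0, 7, 0];
    assert_eq!(
        vceq_u8_model(a, b),
        [0xFF, 0x00, 0xFF, 0x00, 0xFF, 0x00, 0xFF, 0x00]
    );
}
```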

vcge_f32Experimentalneon and v7 and ARM

Floating-point compare greater than or equal

vcge_s8Experimentalneon and v7 and ARM

Compare signed greater than or equal

vcge_s16Experimentalneon and v7 and ARM

Compare signed greater than or equal

vcge_s32Experimentalneon and v7 and ARM

Compare signed greater than or equal

vcge_u8Experimentalneon and v7 and ARM

Compare unsigned greater than or equal

vcge_u16Experimentalneon and v7 and ARM

Compare unsigned greater than or equal

vcge_u32Experimentalneon and v7 and ARM

Compare unsigned greater than or equal

vcgeq_f32Experimentalneon and v7 and ARM

Floating-point compare greater than or equal

vcgeq_s8Experimentalneon and v7 and ARM

Compare signed greater than or equal

vcgeq_s16Experimentalneon and v7 and ARM

Compare signed greater than or equal

vcgeq_s32Experimentalneon and v7 and ARM

Compare signed greater than or equal

vcgeq_u8Experimentalneon and v7 and ARM

Compare unsigned greater than or equal

vcgeq_u16Experimentalneon and v7 and ARM

Compare unsigned greater than or equal

vcgeq_u32Experimentalneon and v7 and ARM

Compare unsigned greater than or equal

vcgt_f32Experimentalneon and v7 and ARM

Floating-point compare greater than

vcgt_s8Experimentalneon and v7 and ARM

Compare signed greater than

vcgt_s16Experimentalneon and v7 and ARM

Compare signed greater than

vcgt_s32Experimentalneon and v7 and ARM

Compare signed greater than

vcgt_u8Experimentalneon and v7 and ARM

Compare unsigned higher

vcgt_u16Experimentalneon and v7 and ARM

Compare unsigned higher

vcgt_u32Experimentalneon and v7 and ARM

Compare unsigned higher

vcgtq_f32Experimentalneon and v7 and ARM

Floating-point compare greater than

vcgtq_s8Experimentalneon and v7 and ARM

Compare signed greater than

vcgtq_s16Experimentalneon and v7 and ARM

Compare signed greater than

vcgtq_s32Experimentalneon and v7 and ARM

Compare signed greater than

vcgtq_u8Experimentalneon and v7 and ARM

Compare unsigned higher

vcgtq_u16Experimentalneon and v7 and ARM

Compare unsigned higher

vcgtq_u32Experimentalneon and v7 and ARM

Compare unsigned higher

vcle_f32Experimentalneon and v7 and ARM

Floating-point compare less than or equal

vcle_s8Experimentalneon and v7 and ARM

Compare signed less than or equal

vcle_s16Experimentalneon and v7 and ARM

Compare signed less than or equal

vcle_s32Experimentalneon and v7 and ARM

Compare signed less than or equal

vcle_u8Experimentalneon and v7 and ARM

Compare unsigned less than or equal

vcle_u16Experimentalneon and v7 and ARM

Compare unsigned less than or equal

vcle_u32Experimentalneon and v7 and ARM

Compare unsigned less than or equal

vcleq_f32Experimentalneon and v7 and ARM

Floating-point compare less than or equal

vcleq_s8Experimentalneon and v7 and ARM

Compare signed less than or equal

vcleq_s16Experimentalneon and v7 and ARM

Compare signed less than or equal

vcleq_s32Experimentalneon and v7 and ARM

Compare signed less than or equal

vcleq_u8Experimentalneon and v7 and ARM

Compare unsigned less than or equal

vcleq_u16Experimentalneon and v7 and ARM

Compare unsigned less than or equal

vcleq_u32Experimentalneon and v7 and ARM

Compare unsigned less than or equal

vclt_f32Experimentalneon and v7 and ARM

Floating-point compare less than

vclt_s8Experimentalneon and v7 and ARM

Compare signed less than

vclt_s16Experimentalneon and v7 and ARM

Compare signed less than

vclt_s32Experimentalneon and v7 and ARM

Compare signed less than

vclt_u8Experimentalneon and v7 and ARM

Compare unsigned less than

vclt_u16Experimentalneon and v7 and ARM

Compare unsigned less than

vclt_u32Experimentalneon and v7 and ARM

Compare unsigned less than

vcltq_f32Experimentalneon and v7 and ARM

Floating-point compare less than

vcltq_s8Experimentalneon and v7 and ARM

Compare signed less than

vcltq_s16Experimentalneon and v7 and ARM

Compare signed less than

vcltq_s32Experimentalneon and v7 and ARM

Compare signed less than

vcltq_u8Experimentalneon and v7 and ARM

Compare unsigned less than

vcltq_u16Experimentalneon and v7 and ARM

Compare unsigned less than

vcltq_u32Experimentalneon and v7 and ARM

Compare unsigned less than

vdupq_n_s8Experimentalneon and v7 and ARM

Duplicate vector element to vector or scalar

vdupq_n_u8Experimentalneon and v7 and ARM

Duplicate vector element to vector or scalar

veor_s8Experimentalneon and v7 and ARM

Vector bitwise exclusive or (vector)

veor_s16Experimentalneon and v7 and ARM

Vector bitwise exclusive or (vector)

veor_s32Experimentalneon and v7 and ARM

Vector bitwise exclusive or (vector)

veor_s64Experimentalneon and v7 and ARM

Vector bitwise exclusive or (vector)

veor_u8Experimentalneon and v7 and ARM

Vector bitwise exclusive or (vector)

veor_u16Experimentalneon and v7 and ARM

Vector bitwise exclusive or (vector)

veor_u32Experimentalneon and v7 and ARM

Vector bitwise exclusive or (vector)

veor_u64Experimentalneon and v7 and ARM

Vector bitwise exclusive or (vector)

veorq_s8Experimentalneon and v7 and ARM

Vector bitwise exclusive or (vector)

veorq_s16Experimentalneon and v7 and ARM

Vector bitwise exclusive or (vector)

veorq_s32Experimentalneon and v7 and ARM

Vector bitwise exclusive or (vector)

veorq_s64Experimentalneon and v7 and ARM

Vector bitwise exclusive or (vector)

veorq_u8Experimentalneon and v7 and ARM

Vector bitwise exclusive or (vector)

veorq_u16Experimentalneon and v7 and ARM

Vector bitwise exclusive or (vector)

veorq_u32Experimentalneon and v7 and ARM

Vector bitwise exclusive or (vector)

veorq_u64Experimentalneon and v7 and ARM

Vector bitwise exclusive or (vector)

vextq_s8Experimentalneon and v7 and ARM

Extract vector from pair of vectors

vextq_u8Experimentalneon and v7 and ARM

Extract vector from pair of vectors

vget_lane_u8Experimentalneon and v7 and ARM

Move vector element to general-purpose register

vget_lane_u64Experimentalneon and v7 and ARM

Move vector element to general-purpose register

vgetq_lane_u16Experimentalneon and v7 and ARM

Move vector element to general-purpose register

vgetq_lane_u32Experimentalneon and v7 and ARM

Move vector element to general-purpose register

vgetq_lane_u64Experimentalneon and v7 and ARM

Move vector element to general-purpose register

vhadd_s8Experimentalneon and v7 and ARM

Halving add

vhadd_s16Experimentalneon and v7 and ARM

Halving add

vhadd_s32Experimentalneon and v7 and ARM

Halving add

vhadd_u8Experimentalneon and v7 and ARM

Halving add

vhadd_u16Experimentalneon and v7 and ARM

Halving add

vhadd_u32Experimentalneon and v7 and ARM

Halving add

vhaddq_s8Experimentalneon and v7 and ARM

Halving add

vhaddq_s16Experimentalneon and v7 and ARM

Halving add

vhaddq_s32Experimentalneon and v7 and ARM

Halving add

vhaddq_u8Experimentalneon and v7 and ARM

Halving add

vhaddq_u16Experimentalneon and v7 and ARM

Halving add

vhaddq_u32Experimentalneon and v7 and ARM

Halving add
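A halving add computes (a + b) >> 1 per lane with the sum formed in double-width arithmetic, so the intermediate value cannot overflow; the vrhadd variants additionally round instead of truncating. A portable per-lane sketch (hypothetical helper names):

```rust
// Portable models of one u8 lane of vhadd (truncating) and vrhadd
// (rounding): the sum is formed in 16 bits, so it never overflows.
fn vhadd_u8_lane(a: u8, b: u8) -> u8 {
    ((u16::from(a) + u16::from(b)) >> 1) as u8
}

fn vrhadd_u8_lane(a: u8, b: u8) -> u8 {
    ((u16::from(a) + u16::from(b) + 1) >> 1) as u8
}

fn main() {
    assert_eq!(vhadd_u8_lane(255, 255), 255); // no intermediate overflow
    assert_eq!(vhadd_u8_lane(1, 2), 1);       // truncates
    assert_eq!(vrhadd_u8_lane(1, 2), 2);      // rounds up
}
```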

vhsub_s8Experimentalneon and v7 and ARM

Signed halving subtract

vhsub_s16Experimentalneon and v7 and ARM

Signed halving subtract

vhsub_s32Experimentalneon and v7 and ARM

Signed halving subtract

vhsub_u8Experimentalneon and v7 and ARM

Unsigned halving subtract

vhsub_u16Experimentalneon and v7 and ARM

Unsigned halving subtract

vhsub_u32Experimentalneon and v7 and ARM

Unsigned halving subtract

vhsubq_s8Experimentalneon and v7 and ARM

Signed halving subtract

vhsubq_s16Experimentalneon and v7 and ARM

Signed halving subtract

vhsubq_s32Experimentalneon and v7 and ARM

Signed halving subtract

vhsubq_u8Experimentalneon and v7 and ARM

Unsigned halving subtract

vhsubq_u16Experimentalneon and v7 and ARM

Unsigned halving subtract

vhsubq_u32Experimentalneon and v7 and ARM

Unsigned halving subtract

vld1q_s8Experimentalneon and v7 and ARM

Load multiple single-element structures to one, two, three, or four registers

vld1q_u8Experimentalneon and v7 and ARM

Load multiple single-element structures to one, two, three, or four registers

vmovl_s8Experimentalneon and v7 and ARM

Vector long move.

vmovl_s16Experimentalneon and v7 and ARM

Vector long move.

vmovl_s32Experimentalneon and v7 and ARM

Vector long move.

vmovl_u8Experimentalneon and v7 and ARM

Vector long move.

vmovl_u16Experimentalneon and v7 and ARM

Vector long move.

vmovl_u32Experimentalneon and v7 and ARM

Vector long move.

vmovn_s16Experimentalneon and v7 and ARM

Vector narrow integer.

vmovn_s32Experimentalneon and v7 and ARM

Vector narrow integer.

vmovn_s64Experimentalneon and v7 and ARM

Vector narrow integer.

vmovn_u16Experimentalneon and v7 and ARM

Vector narrow integer.

vmovn_u32Experimentalneon and v7 and ARM

Vector narrow integer.

vmovn_u64Experimentalneon and v7 and ARM

Vector narrow integer.
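vmovl widens each lane (u8 → u16, and so on); vmovn narrows back, keeping only the low half of each lane. A portable model of the u8/u16 pair (helper names illustrative):

```rust
// Portable models: vmovl_u8 widens each lane to u16; vmovn_u16
// narrows back, truncating to the low byte of each lane.
fn vmovl_u8_model(a: [u8; 8]) -> [u16; 8] {
    a.map(u16::from)
}

fn vmovn_u16_model(a: [u16; 8]) -> [u8; 8] {
    a.map(|x| x as u8) // the high byte of each lane is discarded
}

fn main() {
    let wide = vmovl_u8_model([0, 1, 2, 255, 4, 5, 6, 7]);
    assert_eq!(wide[3], 255u16);
    // Narrowing truncates: 0x1FF keeps only 0xFF.
    assert_eq!(vmovn_u16_model([0x1FF; 8]), [0xFF; 8]);
}
```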

vmovq_n_u8Experimentalneon and v7 and ARM

Duplicate vector element to vector or scalar

vmul_f32Experimentalneon and v7 and ARM

Multiply

vmul_s8Experimentalneon and v7 and ARM

Multiply

vmul_s16Experimentalneon and v7 and ARM

Multiply

vmul_s32Experimentalneon and v7 and ARM

Multiply

vmul_u8Experimentalneon and v7 and ARM

Multiply

vmul_u16Experimentalneon and v7 and ARM

Multiply

vmul_u32Experimentalneon and v7 and ARM

Multiply

vmulq_f32Experimentalneon and v7 and ARM

Multiply

vmulq_s8Experimentalneon and v7 and ARM

Multiply

vmulq_s16Experimentalneon and v7 and ARM

Multiply

vmulq_s32Experimentalneon and v7 and ARM

Multiply

vmulq_u8Experimentalneon and v7 and ARM

Multiply

vmulq_u16Experimentalneon and v7 and ARM

Multiply

vmulq_u32Experimentalneon and v7 and ARM

Multiply

vmvn_p8Experimentalneon and v7 and ARM

Vector bitwise not.

vmvn_s8Experimentalneon and v7 and ARM

Vector bitwise not.

vmvn_s16Experimentalneon and v7 and ARM

Vector bitwise not.

vmvn_s32Experimentalneon and v7 and ARM

Vector bitwise not.

vmvn_u8Experimentalneon and v7 and ARM

Vector bitwise not.

vmvn_u16Experimentalneon and v7 and ARM

Vector bitwise not.

vmvn_u32Experimentalneon and v7 and ARM

Vector bitwise not.

vmvnq_p8Experimentalneon and v7 and ARM

Vector bitwise not.

vmvnq_s8Experimentalneon and v7 and ARM

Vector bitwise not.

vmvnq_s16Experimentalneon and v7 and ARM

Vector bitwise not.

vmvnq_s32Experimentalneon and v7 and ARM

Vector bitwise not.

vmvnq_u8Experimentalneon and v7 and ARM

Vector bitwise not.

vmvnq_u16Experimentalneon and v7 and ARM

Vector bitwise not.

vmvnq_u32Experimentalneon and v7 and ARM

Vector bitwise not.

vorr_s8Experimentalneon and v7 and ARM

Vector bitwise or (immediate, inclusive)

vorr_s16Experimentalneon and v7 and ARM

Vector bitwise or (immediate, inclusive)

vorr_s32Experimentalneon and v7 and ARM

Vector bitwise or (immediate, inclusive)

vorr_s64Experimentalneon and v7 and ARM

Vector bitwise or (immediate, inclusive)

vorr_u8Experimentalneon and v7 and ARM

Vector bitwise or (immediate, inclusive)

vorr_u16Experimentalneon and v7 and ARM

Vector bitwise or (immediate, inclusive)

vorr_u32Experimentalneon and v7 and ARM

Vector bitwise or (immediate, inclusive)

vorr_u64Experimentalneon and v7 and ARM

Vector bitwise or (immediate, inclusive)

vorrq_s8Experimentalneon and v7 and ARM

Vector bitwise or (immediate, inclusive)

vorrq_s16Experimentalneon and v7 and ARM

Vector bitwise or (immediate, inclusive)

vorrq_s32Experimentalneon and v7 and ARM

Vector bitwise or (immediate, inclusive)

vorrq_s64Experimentalneon and v7 and ARM

Vector bitwise or (immediate, inclusive)

vorrq_u8Experimentalneon and v7 and ARM

Vector bitwise or (immediate, inclusive)

vorrq_u16Experimentalneon and v7 and ARM

Vector bitwise or (immediate, inclusive)

vorrq_u32Experimentalneon and v7 and ARM

Vector bitwise or (immediate, inclusive)

vorrq_u64Experimentalneon and v7 and ARM

Vector bitwise or (immediate, inclusive)

vpmax_f32Experimentalneon and v7 and ARM

Folding maximum of adjacent pairs

vpmax_s8Experimentalneon and v7 and ARM

Folding maximum of adjacent pairs

vpmax_s16Experimentalneon and v7 and ARM

Folding maximum of adjacent pairs

vpmax_s32Experimentalneon and v7 and ARM

Folding maximum of adjacent pairs

vpmax_u8Experimentalneon and v7 and ARM

Folding maximum of adjacent pairs

vpmax_u16Experimentalneon and v7 and ARM

Folding maximum of adjacent pairs

vpmax_u32Experimentalneon and v7 and ARM

Folding maximum of adjacent pairs

vpmin_f32Experimentalneon and v7 and ARM

Folding minimum of adjacent pairs

vpmin_s8Experimentalneon and v7 and ARM

Folding minimum of adjacent pairs

vpmin_s16Experimentalneon and v7 and ARM

Folding minimum of adjacent pairs

vpmin_s32Experimentalneon and v7 and ARM

Folding minimum of adjacent pairs

vpmin_u8Experimentalneon and v7 and ARM

Folding minimum of adjacent pairs

vpmin_u16Experimentalneon and v7 and ARM

Folding minimum of adjacent pairs

vpmin_u32Experimentalneon and v7 and ARM

Folding minimum of adjacent pairs
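The folding (pairwise) maximum and minimum operate on adjacent pairs: pairs from the first operand fill the low half of the result and pairs from the second operand fill the high half. A portable model of vpmax_u8's semantics (helper name illustrative):

```rust
// Portable model of vpmax_u8: maximum of each adjacent pair, with
// pairs from `a` in the low half of the result and pairs from `b`
// in the high half.
fn vpmax_u8_model(a: [u8; 8], b: [u8; 8]) -> [u8; 8] {
    let mut r = [0u8; 8];
    for i in 0..4 {
        r[i] = a[2 * i].max(a[2 * i + 1]);
        r[i + 4] = b[2 * i].max(b[2 * i + 1]);
    }
    r
}

fn main() {
    assert_eq!(
        vpmax_u8_model([1, 9, 2, 8, 3, 7, 4, 6], [10, 0, 20, 0, 30, 0, 40, 0]),
        [9, 8, 7, 6, 10, 20, 30, 40]
    );
}
```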

vqadd_s8Experimentalneon and v7 and ARM

Saturating add

vqadd_s16Experimentalneon and v7 and ARM

Saturating add

vqadd_s32Experimentalneon and v7 and ARM

Saturating add

vqadd_u8Experimentalneon and v7 and ARM

Saturating add

vqadd_u16Experimentalneon and v7 and ARM

Saturating add

vqadd_u32Experimentalneon and v7 and ARM

Saturating add

vqaddq_s8Experimentalneon and v7 and ARM

Saturating add

vqaddq_s16Experimentalneon and v7 and ARM

Saturating add

vqaddq_s32Experimentalneon and v7 and ARM

Saturating add

vqaddq_u8Experimentalneon and v7 and ARM

Saturating add

vqaddq_u16Experimentalneon and v7 and ARM

Saturating add

vqaddq_u32Experimentalneon and v7 and ARM

Saturating add
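Unlike the wrapping vadd family, vqadd saturates each lane at the bounds of the element type. A lane-wise portable sketch (helper name illustrative):

```rust
// Portable model of vqadd_u8: element-wise saturating addition.
fn vqadd_u8_model(a: [u8; 8], b: [u8; 8]) -> [u8; 8] {
    let mut r = [0u8; 8];
    for i in 0..8 {
        r[i] = a[i].saturating_add(b[i]); // clamps at 255 instead of wrapping
    }
    r
}

fn main() {
    assert_eq!(vqadd_u8_model([250; 8], [10; 8]), [255; 8]);
    assert_eq!(vqadd_u8_model([1; 8], [2; 8]), [3; 8]);
}
```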

vqmovn_u64Experimentalneon and v7 and ARM

Unsigned saturating extract narrow.

vqsub_s8Experimentalneon and v7 and ARM

Saturating subtract

vqsub_s16Experimentalneon and v7 and ARM

Saturating subtract

vqsub_s32Experimentalneon and v7 and ARM

Saturating subtract

vqsub_u8Experimentalneon and v7 and ARM

Saturating subtract

vqsub_u16Experimentalneon and v7 and ARM

Saturating subtract

vqsub_u32Experimentalneon and v7 and ARM

Saturating subtract

vqsubq_s8Experimentalneon and v7 and ARM

Saturating subtract

vqsubq_s16Experimentalneon and v7 and ARM

Saturating subtract

vqsubq_s32Experimentalneon and v7 and ARM

Saturating subtract

vqsubq_u8Experimentalneon and v7 and ARM

Saturating subtract

vqsubq_u16Experimentalneon and v7 and ARM

Saturating subtract

vqsubq_u32Experimentalneon and v7 and ARM

Saturating subtract

vreinterpret_u64_u32Experimentalneon and v7 and ARM

Vector reinterpret cast operation

vreinterpretq_s8_u8Experimentalneon and v7 and ARM

Vector reinterpret cast operation

vreinterpretq_u16_u8Experimentalneon and v7 and ARM

Vector reinterpret cast operation

vreinterpretq_u32_u8Experimentalneon and v7 and ARM

Vector reinterpret cast operation

vreinterpretq_u64_u8Experimentalneon and v7 and ARM

Vector reinterpret cast operation

vreinterpretq_u8_s8Experimentalneon and v7 and ARM

Vector reinterpret cast operation

vrhadd_s8Experimentalneon and v7 and ARM

Rounding halving add

vrhadd_s16Experimentalneon and v7 and ARM

Rounding halving add

vrhadd_s32Experimentalneon and v7 and ARM

Rounding halving add

vrhadd_u8Experimentalneon and v7 and ARM

Rounding halving add

vrhadd_u16Experimentalneon and v7 and ARM

Rounding halving add

vrhadd_u32Experimentalneon and v7 and ARM

Rounding halving add

vrhaddq_s8Experimentalneon and v7 and ARM

Rounding halving add

vrhaddq_s16Experimentalneon and v7 and ARM

Rounding halving add

vrhaddq_s32Experimentalneon and v7 and ARM

Rounding halving add

vrhaddq_u8Experimentalneon and v7 and ARM

Rounding halving add

vrhaddq_u16Experimentalneon and v7 and ARM

Rounding halving add

vrhaddq_u32Experimentalneon and v7 and ARM

Rounding halving add

vrsqrte_f32ExperimentalARM and neon

Reciprocal square-root estimate.

vshlq_n_u8Experimentalneon and v7 and ARM

Shift left

vshrq_n_u8Experimentalneon and v7 and ARM

Unsigned shift right

vsub_f32Experimentalneon and v7 and ARM

Subtract

vsub_s8Experimentalneon and v7 and ARM

Subtract

vsub_s16Experimentalneon and v7 and ARM

Subtract

vsub_s32Experimentalneon and v7 and ARM

Subtract

vsub_s64Experimentalneon and v7 and ARM

Subtract

vsub_u8Experimentalneon and v7 and ARM

Subtract

vsub_u16Experimentalneon and v7 and ARM

Subtract

vsub_u32Experimentalneon and v7 and ARM

Subtract

vsub_u64Experimentalneon and v7 and ARM

Subtract

vsubq_f32Experimentalneon and v7 and ARM

Subtract

vsubq_s8Experimentalneon and v7 and ARM

Subtract

vsubq_s16Experimentalneon and v7 and ARM

Subtract

vsubq_s32Experimentalneon and v7 and ARM

Subtract

vsubq_s64Experimentalneon and v7 and ARM

Subtract

vsubq_u8Experimentalneon and v7 and ARM

Subtract

vsubq_u16Experimentalneon and v7 and ARM

Subtract

vsubq_u32Experimentalneon and v7 and ARM

Subtract

vsubq_u64Experimentalneon and v7 and ARM

Subtract

vtbl1_p8ExperimentalARM and neon,v7

Table look-up

vtbl1_s8ExperimentalARM and neon,v7

Table look-up

vtbl1_u8ExperimentalARM and neon,v7

Table look-up

vtbl2_p8ExperimentalARM and neon,v7

Table look-up

vtbl2_s8ExperimentalARM and neon,v7

Table look-up

vtbl2_u8ExperimentalARM and neon,v7

Table look-up

vtbl3_p8ExperimentalARM and neon,v7

Table look-up

vtbl3_s8ExperimentalARM and neon,v7

Table look-up

vtbl3_u8ExperimentalARM and neon,v7

Table look-up

vtbl4_p8ExperimentalARM and neon,v7

Table look-up

vtbl4_s8ExperimentalARM and neon,v7

Table look-up

vtbl4_u8ExperimentalARM and neon,v7

Table look-up
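vtbl1 uses each byte of an index vector to select a byte from an 8-byte table, producing 0 for out-of-range indices (the vtbx variants below keep the existing destination byte instead). A portable model of vtbl1_u8's semantics (helper name illustrative):

```rust
// Portable model of vtbl1_u8: per-byte table lookup. Indices >= 8
// yield 0; the vtbx variants would preserve the destination byte.
fn vtbl1_u8_model(table: [u8; 8], idx: [u8; 8]) -> [u8; 8] {
    idx.map(|i| if usize::from(i) < 8 { table[usize::from(i)] } else { 0 })
}

fn main() {
    let table = [10, 11, 12, 13, 14, 15, 16, 17];
    // Reversing indices reverses the table.
    assert_eq!(
        vtbl1_u8_model(table, [7, 6, 5, 4, 3, 2, 1, 0]),
        [17, 16, 15, 14, 13, 12, 11, 10]
    );
    // Out-of-range indices select zero.
    assert_eq!(vtbl1_u8_model(table, [8; 8]), [0; 8]);
}
```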

vtbx1_p8ExperimentalARM and neon,v7

Extended table look-up

vtbx1_s8ExperimentalARM and neon,v7

Extended table look-up

vtbx1_u8ExperimentalARM and neon,v7

Extended table look-up

vtbx2_p8ExperimentalARM and neon,v7

Extended table look-up

vtbx2_s8ExperimentalARM and neon,v7

Extended table look-up

vtbx2_u8ExperimentalARM and neon,v7

Extended table look-up

vtbx3_p8ExperimentalARM and neon,v7

Extended table look-up

vtbx3_s8ExperimentalARM and neon,v7

Extended table look-up

vtbx3_u8ExperimentalARM and neon,v7

Extended table look-up

vtbx4_p8ExperimentalARM and neon,v7

Extended table look-up

vtbx4_s8ExperimentalARM and neon,v7

Extended table look-up

vtbx4_u8ExperimentalARM and neon,v7

Extended table look-up