API¶
Dataframe¶
DataFrame (dsk, name, meta, divisions) |
Parallel Pandas DataFrame |
DataFrame.add (other[, axis, level, fill_value]) |
Get Addition of dataframe and other, element-wise (binary operator add). |
DataFrame.append (other[, interleave_partitions]) |
Append rows of other to the end of caller, returning a new object. |
DataFrame.apply (func[, axis, broadcast, …]) |
Parallel version of pandas.DataFrame.apply |
DataFrame.assign (**kwargs) |
Assign new columns to a DataFrame. |
DataFrame.astype (dtype) |
Cast a pandas object to a specified dtype dtype . |
DataFrame.categorize ([columns, index, …]) |
Convert columns of the DataFrame to category dtype. |
DataFrame.columns |
Return the column labels of the DataFrame. |
DataFrame.compute (**kwargs) |
Compute this dask collection |
DataFrame.corr ([method, min_periods, …]) |
Compute pairwise correlation of columns, excluding NA/null values. |
DataFrame.count ([axis, split_every]) |
Count non-NA cells for each column or row. |
DataFrame.cov ([min_periods, split_every]) |
Compute pairwise covariance of columns, excluding NA/null values. |
DataFrame.cummax ([axis, skipna, out]) |
Return cumulative maximum over a DataFrame or Series axis. |
DataFrame.cummin ([axis, skipna, out]) |
Return cumulative minimum over a DataFrame or Series axis. |
DataFrame.cumprod ([axis, skipna, dtype, out]) |
Return cumulative product over a DataFrame or Series axis. |
DataFrame.cumsum ([axis, skipna, dtype, out]) |
Return cumulative sum over a DataFrame or Series axis. |
DataFrame.describe ([split_every, …]) |
Generate descriptive statistics that summarize the central tendency, dispersion and shape of a dataset’s distribution, excluding NaN values. |
DataFrame.div (other[, axis, level, fill_value]) |
Get Floating division of dataframe and other, element-wise (binary operator truediv). |
DataFrame.drop ([labels, axis, columns, errors]) |
Drop specified labels from rows or columns. |
DataFrame.drop_duplicates ([subset, …]) |
Return DataFrame with duplicate rows removed, optionally only considering certain columns. |
DataFrame.dropna ([how, subset, thresh]) |
Remove missing values. |
DataFrame.dtypes |
Return data types |
DataFrame.explode (column) |
Transform each element of a list-like to a row, replicating the index values. |
DataFrame.fillna ([value, method, limit, axis]) |
Fill NA/NaN values using the specified method. |
DataFrame.floordiv (other[, axis, level, …]) |
Get Integer division of dataframe and other, element-wise (binary operator floordiv). |
DataFrame.get_partition (n) |
Get a dask DataFrame/Series representing the nth partition. |
DataFrame.groupby ([by]) |
Group DataFrame or Series using a mapper or by a Series of columns. |
DataFrame.head ([n, npartitions, compute]) |
First n rows of the dataset |
DataFrame.iloc |
Purely integer-location based indexing for selection by position. |
DataFrame.index |
Return dask Index instance |
DataFrame.isna () |
Detect missing values. |
DataFrame.isnull () |
Detect missing values. |
DataFrame.iterrows () |
Iterate over DataFrame rows as (index, Series) pairs. |
DataFrame.itertuples ([index, name]) |
Iterate over DataFrame rows as namedtuples. |
DataFrame.join (other[, on, how, lsuffix, …]) |
Join columns of another DataFrame. |
DataFrame.known_divisions |
Whether divisions are already known |
DataFrame.loc |
Purely label-location based indexer for selection by label. |
DataFrame.map_partitions (func, *args, **kwargs) |
Apply Python function on each DataFrame partition. |
DataFrame.mask (cond[, other]) |
Replace values where the condition is True. |
DataFrame.max ([axis, skipna, split_every, out]) |
Return the maximum of the values for the requested axis. |
DataFrame.mean ([axis, skipna, split_every, …]) |
Return the mean of the values for the requested axis. |
DataFrame.merge (right[, how, on, left_on, …]) |
Merge the DataFrame with another DataFrame |
DataFrame.min ([axis, skipna, split_every, out]) |
Return the minimum of the values for the requested axis. |
DataFrame.mod (other[, axis, level, fill_value]) |
Get Modulo of dataframe and other, element-wise (binary operator mod). |
DataFrame.mul (other[, axis, level, fill_value]) |
Get Multiplication of dataframe and other, element-wise (binary operator mul). |
DataFrame.ndim |
Return dimensionality |
DataFrame.nlargest ([n, columns, split_every]) |
Return the first n rows ordered by columns in descending order. |
DataFrame.npartitions |
Return number of partitions |
DataFrame.partitions |
Slice dataframe by partitions |
DataFrame.pop (item) |
Return item and drop from frame. |
DataFrame.pow (other[, axis, level, fill_value]) |
Get Exponential power of dataframe and other, element-wise (binary operator pow). |
DataFrame.prod ([axis, skipna, split_every, …]) |
Return the product of the values for the requested axis. |
DataFrame.quantile ([q, axis, method]) |
Approximate row-wise and precise column-wise quantiles of DataFrame |
DataFrame.query (expr, **kwargs) |
Filter dataframe with complex expression |
DataFrame.radd (other[, axis, level, fill_value]) |
Get Addition of dataframe and other, element-wise (binary operator radd). |
DataFrame.random_split (frac[, random_state]) |
Pseudorandomly split dataframe into different pieces row-wise |
DataFrame.rdiv (other[, axis, level, fill_value]) |
Get Floating division of dataframe and other, element-wise (binary operator rtruediv). |
DataFrame.rename ([index, columns]) |
Alter axes labels. |
DataFrame.repartition ([divisions, …]) |
Repartition dataframe along new divisions |
DataFrame.replace ([to_replace, value, regex]) |
Replace values given in to_replace with value. |
DataFrame.reset_index ([drop]) |
Reset the index to the default index. |
DataFrame.rfloordiv (other[, axis, level, …]) |
Get Integer division of dataframe and other, element-wise (binary operator rfloordiv). |
DataFrame.rmod (other[, axis, level, fill_value]) |
Get Modulo of dataframe and other, element-wise (binary operator rmod). |
DataFrame.rmul (other[, axis, level, fill_value]) |
Get Multiplication of dataframe and other, element-wise (binary operator rmul). |
DataFrame.rpow (other[, axis, level, fill_value]) |
Get Exponential power of dataframe and other, element-wise (binary operator rpow). |
DataFrame.rsub (other[, axis, level, fill_value]) |
Get Subtraction of dataframe and other, element-wise (binary operator rsub). |
DataFrame.rtruediv (other[, axis, level, …]) |
Get Floating division of dataframe and other, element-wise (binary operator rtruediv). |
DataFrame.sample ([n, frac, replace, …]) |
Random sample of items |
DataFrame.set_index (other[, drop, sorted, …]) |
Set the DataFrame index (row labels) using an existing column. |
DataFrame.shape |
Return a tuple representing the dimensionality of the DataFrame. |
DataFrame.std ([axis, skipna, ddof, …]) |
Return sample standard deviation over requested axis. |
DataFrame.sub (other[, axis, level, fill_value]) |
Get Subtraction of dataframe and other, element-wise (binary operator sub). |
DataFrame.sum ([axis, skipna, split_every, …]) |
Return the sum of the values for the requested axis. |
DataFrame.tail ([n, compute]) |
Last n rows of the dataset |
DataFrame.to_bag ([index]) |
Create Dask Bag from a Dask DataFrame |
DataFrame.to_csv (filename, **kwargs) |
Store Dask DataFrame to CSV files |
DataFrame.to_dask_array ([lengths]) |
Convert a dask DataFrame to a dask array. |
DataFrame.to_delayed ([optimize_graph]) |
Convert into a list of dask.delayed objects, one per partition. |
DataFrame.to_hdf (path_or_buf, key[, mode, …]) |
Store Dask Dataframe to Hierarchical Data Format (HDF) files |
DataFrame.to_json (filename, *args, **kwargs) |
See dd.to_json docstring for more information |
DataFrame.to_parquet (path, *args, **kwargs) |
Store Dask.dataframe to Parquet files |
DataFrame.to_records ([index, lengths]) |
Create Dask Array from a Dask Dataframe |
DataFrame.truediv (other[, axis, level, …]) |
Get Floating division of dataframe and other, element-wise (binary operator truediv). |
DataFrame.values |
Return a dask.array of the values of this dataframe |
DataFrame.var ([axis, skipna, ddof, …]) |
Return unbiased variance over requested axis. |
DataFrame.visualize ([filename, format, …]) |
Render the computation of this object’s task graph using graphviz. |
DataFrame.where (cond[, other]) |
Replace values where the condition is False. |
Series¶
Series (dsk, name, meta, divisions) |
Parallel Pandas Series |
Series.add (other[, level, fill_value, axis]) |
Return Addition of series and other, element-wise (binary operator add). |
Series.align (other[, join, axis, fill_value]) |
Align two objects on their axes with the specified join method for each axis Index. |
Series.all ([axis, skipna, split_every, out]) |
Return whether all elements are True, potentially over an axis. |
Series.any ([axis, skipna, split_every, out]) |
Return whether any element is True, potentially over an axis. |
Series.append (other[, interleave_partitions]) |
Concatenate two or more Series. |
Series.apply (func[, convert_dtype, meta, args]) |
Parallel version of pandas.Series.apply |
Series.astype (dtype) |
Cast a pandas object to a specified dtype dtype . |
Series.autocorr ([lag, split_every]) |
Compute the lag-N autocorrelation. |
Series.between (left, right[, inclusive]) |
Return boolean Series equivalent to left <= series <= right. |
Series.bfill ([axis, limit]) |
Synonym for DataFrame.fillna() with method='bfill' . |
Series.cat |
Namespace of categorical methods |
Series.clear_divisions () |
Forget division information |
Series.clip ([lower, upper, out]) |
Trim values at input threshold(s). |
Series.clip_lower (threshold) |
Trim values below a given threshold. |
Series.clip_upper (threshold) |
Trim values above a given threshold. |
Series.compute (**kwargs) |
Compute this dask collection |
Series.copy () |
Make a copy of the dataframe |
Series.corr (other[, method, min_periods, …]) |
Compute correlation with other Series, excluding missing values. |
Series.count ([split_every]) |
Return number of non-NA/null observations in the Series. |
Series.cov (other[, min_periods, split_every]) |
Compute covariance with Series, excluding missing values. |
Series.cummax ([axis, skipna, out]) |
Return cumulative maximum over a DataFrame or Series axis. |
Series.cummin ([axis, skipna, out]) |
Return cumulative minimum over a DataFrame or Series axis. |
Series.cumprod ([axis, skipna, dtype, out]) |
Return cumulative product over a DataFrame or Series axis. |
Series.cumsum ([axis, skipna, dtype, out]) |
Return cumulative sum over a DataFrame or Series axis. |
Series.describe ([split_every, percentiles, …]) |
Generate descriptive statistics that summarize the central tendency, dispersion and shape of a dataset’s distribution, excluding NaN values. |
Series.diff ([periods, axis]) |
First discrete difference of element. |
Series.div (other[, level, fill_value, axis]) |
Return Floating division of series and other, element-wise (binary operator truediv). |
Series.drop_duplicates ([subset, …]) |
Return DataFrame with duplicate rows removed, optionally only considering certain columns. |
Series.dropna () |
Return a new Series with missing values removed. |
Series.dt |
Namespace of datetime methods |
Series.dtype |
Return data type |
Series.eq (other[, level, fill_value, axis]) |
Return Equal to of series and other, element-wise (binary operator eq). |
Series.explode () |
Transform each element of a list-like to a row, replicating the index values. |
Series.ffill ([axis, limit]) |
Synonym for DataFrame.fillna() with method='ffill' . |
Series.fillna ([value, method, limit, axis]) |
Fill NA/NaN values using the specified method. |
Series.first (offset) |
Convenience method for subsetting initial periods of time series data based on a date offset. |
Series.floordiv (other[, level, fill_value, axis]) |
Return Integer division of series and other, element-wise (binary operator floordiv). |
Series.ge (other[, level, fill_value, axis]) |
Return Greater than or equal to of series and other, element-wise (binary operator ge). |
Series.get_partition (n) |
Get a dask DataFrame/Series representing the nth partition. |
Series.groupby ([by]) |
Group DataFrame or Series using a mapper or by a Series of columns. |
Series.gt (other[, level, fill_value, axis]) |
Return Greater than of series and other, element-wise (binary operator gt). |
Series.head ([n, npartitions, compute]) |
First n rows of the dataset |
Series.idxmax ([axis, skipna, split_every]) |
Return index of first occurrence of maximum over requested axis. |
Series.idxmin ([axis, skipna, split_every]) |
Return index of first occurrence of minimum over requested axis. |
Series.isin (values) |
Check whether values are contained in Series. |
Series.isna () |
Detect missing values. |
Series.isnull () |
Detect missing values. |
Series.iteritems () |
Lazily iterate over (index, value) tuples. |
Series.known_divisions |
Whether divisions are already known |
Series.last (offset) |
Convenience method for subsetting final periods of time series data based on a date offset. |
Series.le (other[, level, fill_value, axis]) |
Return Less than or equal to of series and other, element-wise (binary operator le). |
Series.loc |
Purely label-location based indexer for selection by label. |
Series.lt (other[, level, fill_value, axis]) |
Return Less than of series and other, element-wise (binary operator lt). |
Series.map (arg[, na_action, meta]) |
Map values of Series according to input correspondence. |
Series.map_overlap (func, before, after, …) |
Apply a function to each partition, sharing rows with adjacent partitions. |
Series.map_partitions (func, *args, **kwargs) |
Apply Python function on each DataFrame partition. |
Series.mask (cond[, other]) |
Replace values where the condition is True. |
Series.max ([axis, skipna, split_every, out]) |
Return the maximum of the values for the requested axis. |
Series.mean ([axis, skipna, split_every, …]) |
Return the mean of the values for the requested axis. |
Series.memory_usage ([index, deep]) |
Return the memory usage of the Series. |
Series.min ([axis, skipna, split_every, out]) |
Return the minimum of the values for the requested axis. |
Series.mod (other[, level, fill_value, axis]) |
Return Modulo of series and other, element-wise (binary operator mod). |
Series.mul (other[, level, fill_value, axis]) |
Return Multiplication of series and other, element-wise (binary operator mul). |
Series.nbytes |
Number of bytes |
Series.ndim |
Return dimensionality |
Series.ne (other[, level, fill_value, axis]) |
Return Not equal to of series and other, element-wise (binary operator ne). |
Series.nlargest ([n, split_every]) |
Return the largest n elements. |
Series.notnull () |
Detect existing (non-missing) values. |
Series.nsmallest ([n, split_every]) |
Return the smallest n elements. |
Series.nunique ([split_every]) |
Return number of unique elements in the object. |
Series.nunique_approx ([split_every]) |
Approximate number of unique rows. |
Series.persist (**kwargs) |
Persist this dask collection into memory |
Series.pipe (func, *args, **kwargs) |
Apply func(self, *args, **kwargs). |
Series.pow (other[, level, fill_value, axis]) |
Return Exponential power of series and other, element-wise (binary operator pow). |
Series.prod ([axis, skipna, split_every, …]) |
Return the product of the values for the requested axis. |
Series.quantile ([q, method]) |
Approximate quantiles of Series |
Series.radd (other[, level, fill_value, axis]) |
Return Addition of series and other, element-wise (binary operator radd). |
Series.random_split (frac[, random_state]) |
Pseudorandomly split dataframe into different pieces row-wise |
Series.rdiv (other[, level, fill_value, axis]) |
Return Floating division of series and other, element-wise (binary operator rtruediv). |
Series.reduction (chunk[, aggregate, …]) |
Generic row-wise reductions. |
Series.repartition ([divisions, npartitions, …]) |
Repartition dataframe along new divisions |
Series.replace ([to_replace, value, regex]) |
Replace values given in to_replace with value. |
Series.rename ([index, inplace, sorted_index]) |
Alter Series index labels or name |
Series.resample (rule[, closed, label]) |
Resample time-series data. |
Series.reset_index ([drop]) |
Reset the index to the default index. |
Series.rolling (window[, min_periods, …]) |
Provides rolling transformations. |
Series.round ([decimals]) |
Round each value in a Series to the given number of decimals. |
Series.sample ([n, frac, replace, random_state]) |
Random sample of items |
Series.sem ([axis, skipna, ddof, split_every]) |
Return unbiased standard error of the mean over requested axis. |
Series.shape |
Return a tuple representing the dimensionality of a Series. |
Series.shift ([periods, freq, axis]) |
Shift index by desired number of periods with an optional time freq. |
Series.size |
Size of the Series or DataFrame as a Delayed object. |
Series.std ([axis, skipna, ddof, …]) |
Return sample standard deviation over requested axis. |
Series.str |
Namespace for string methods |
Series.sub (other[, level, fill_value, axis]) |
Return Subtraction of series and other, element-wise (binary operator sub). |
Series.sum ([axis, skipna, split_every, …]) |
Return the sum of the values for the requested axis. |
Series.to_bag ([index]) |
Create a Dask Bag from a Series |
Series.to_csv (filename, **kwargs) |
Store Dask DataFrame to CSV files |
Series.to_dask_array ([lengths]) |
Convert a dask DataFrame to a dask array. |
Series.to_delayed ([optimize_graph]) |
Convert into a list of dask.delayed objects, one per partition. |
Series.to_frame ([name]) |
Convert Series to DataFrame. |
Series.to_hdf (path_or_buf, key[, mode, append]) |
Store Dask Dataframe to Hierarchical Data Format (HDF) files |
Series.to_string ([max_rows]) |
Render a string representation of the Series. |
Series.to_timestamp ([freq, how, axis]) |
Cast to DatetimeIndex of timestamps, at beginning of period. |
Series.truediv (other[, level, fill_value, axis]) |
Return Floating division of series and other, element-wise (binary operator truediv). |
Series.unique ([split_every, split_out]) |
Return Series of unique values in the object. |
Series.value_counts ([split_every, split_out]) |
Return a Series containing counts of unique values. |
Series.values |
Return a dask.array of the values of this dataframe |
Series.var ([axis, skipna, ddof, …]) |
Return unbiased variance over requested axis. |
Series.visualize ([filename, format, …]) |
Render the computation of this object’s task graph using graphviz. |
Series.where (cond[, other]) |
Replace values where the condition is False. |
Groupby Operations¶
DataFrameGroupBy.aggregate (arg[, …]) |
Aggregate using one or more operations over the specified axis. |
DataFrameGroupBy.apply (func, *args, **kwargs) |
Parallel version of pandas GroupBy.apply |
DataFrameGroupBy.count ([split_every, split_out]) |
Compute count of group, excluding missing values. |
DataFrameGroupBy.cumcount ([axis]) |
Number each item in each group from 0 to the length of that group - 1. |
DataFrameGroupBy.cumprod ([axis]) |
Cumulative product for each group. |
DataFrameGroupBy.cumsum ([axis]) |
Cumulative sum for each group. |
DataFrameGroupBy.get_group (key) |
Construct DataFrame from group with provided name. |
DataFrameGroupBy.max ([split_every, split_out]) |
Compute max of group values. |
DataFrameGroupBy.mean ([split_every, split_out]) |
Compute mean of groups, excluding missing values. |
DataFrameGroupBy.min ([split_every, split_out]) |
Compute min of group values. |
DataFrameGroupBy.size ([split_every, split_out]) |
Compute group sizes. |
DataFrameGroupBy.std ([ddof, split_every, …]) |
Compute standard deviation of groups, excluding missing values. |
DataFrameGroupBy.sum ([split_every, …]) |
Compute sum of group values. |
DataFrameGroupBy.var ([ddof, split_every, …]) |
Compute variance of groups, excluding missing values. |
DataFrameGroupBy.cov ([ddof, split_every, …]) |
Compute pairwise covariance of columns, excluding NA/null values. |
DataFrameGroupBy.corr ([ddof, split_every, …]) |
Compute pairwise correlation of columns, excluding NA/null values. |
DataFrameGroupBy.first ([split_every, split_out]) |
Compute first of group values. |
DataFrameGroupBy.last ([split_every, split_out]) |
Compute last of group values. |
DataFrameGroupBy.idxmin ([split_every, …]) |
Return index of first occurrence of minimum over requested axis. |
DataFrameGroupBy.idxmax ([split_every, …]) |
Return index of first occurrence of maximum over requested axis. |
SeriesGroupBy.aggregate (arg[, split_every, …]) |
Aggregate using one or more operations over the specified axis. |
SeriesGroupBy.apply (func, *args, **kwargs) |
Parallel version of pandas GroupBy.apply |
SeriesGroupBy.count ([split_every, split_out]) |
Compute count of group, excluding missing values. |
SeriesGroupBy.cumcount ([axis]) |
Number each item in each group from 0 to the length of that group - 1. |
SeriesGroupBy.cumprod ([axis]) |
Cumulative product for each group. |
SeriesGroupBy.cumsum ([axis]) |
Cumulative sum for each group. |
SeriesGroupBy.get_group (key) |
Construct DataFrame from group with provided name. |
SeriesGroupBy.max ([split_every, split_out]) |
Compute max of group values. |
SeriesGroupBy.mean ([split_every, split_out]) |
Compute mean of groups, excluding missing values. |
SeriesGroupBy.min ([split_every, split_out]) |
Compute min of group values. |
SeriesGroupBy.nunique ([split_every, split_out]) |
Return number of unique elements in the group. |
SeriesGroupBy.size ([split_every, split_out]) |
Compute group sizes. |
SeriesGroupBy.std ([ddof, split_every, split_out]) |
Compute standard deviation of groups, excluding missing values. |
SeriesGroupBy.sum ([split_every, split_out, …]) |
Compute sum of group values. |
SeriesGroupBy.var ([ddof, split_every, split_out]) |
Compute variance of groups, excluding missing values. |
SeriesGroupBy.first ([split_every, split_out]) |
Compute first of group values. |
SeriesGroupBy.last ([split_every, split_out]) |
Compute last of group values. |
SeriesGroupBy.idxmin ([split_every, …]) |
Return index of first occurrence of minimum over requested axis. |
SeriesGroupBy.idxmax ([split_every, …]) |
Return index of first occurrence of maximum over requested axis. |
Aggregation (name, chunk, agg[, finalize]) |
User defined groupby-aggregation. |
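The Aggregation class lets you define groupby reductions beyond the built-in methods. The sketch below is illustrative only (toy column names 'g' and 'x' are assumed); it rebuilds a mean from per-partition pieces: chunk runs on each partition's groups, agg combines the per-partition results, and finalize computes the final value.

>>> import pandas as pd
>>> import dask.dataframe as dd
>>> custom_mean = dd.Aggregation(
...     'custom_mean',
...     lambda s: (s.count(), s.sum()),                    # per-partition pieces
...     lambda count, total: (count.sum(), total.sum()),   # combine pieces across partitions
...     lambda count, total: total / count,                # finalize into the mean
... )
>>> ddf = dd.from_pandas(pd.DataFrame({'g': ['a', 'a', 'b'], 'x': [1.0, 2.0, 4.0]}), npartitions=2)
>>> ddf.groupby('g').x.agg(custom_mean).compute()  # doctest: +SKIP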
Rolling Operations¶
rolling.map_overlap (func, df, before, after, …) |
Apply a function to each partition, sharing rows with adjacent partitions. |
Series.rolling (window[, min_periods, …]) |
Provides rolling transformations. |
DataFrame.rolling (window[, min_periods, …]) |
Provides rolling transformations. |
Rolling.apply (func[, raw, engine, …]) |
The rolling function’s apply function. |
Rolling.count () |
The rolling count of any non-NaN observations inside the window. |
Rolling.kurt () |
Calculate unbiased rolling kurtosis. |
Rolling.max () |
Calculate the rolling maximum. |
Rolling.mean () |
Calculate the rolling mean of the values. |
Rolling.median () |
Calculate the rolling median. |
Rolling.min () |
Calculate the rolling minimum. |
Rolling.quantile (quantile) |
Calculate the rolling quantile. |
Rolling.skew () |
Unbiased rolling skewness. |
Rolling.std ([ddof]) |
Calculate rolling standard deviation. |
Rolling.sum () |
Calculate rolling sum of given DataFrame or Series. |
Rolling.var ([ddof]) |
Calculate unbiased rolling variance. |
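As a quick illustration of the rolling interface (toy data assumed, not from the original page), the sketch below computes a rolling mean on a partitioned Series; boundary rows are shared between neighbouring partitions so the result matches what pandas would compute on the full series.

>>> import pandas as pd
>>> import dask.dataframe as dd
>>> s = dd.from_pandas(pd.Series(range(10)), npartitions=2)
>>> s.rolling(3).mean().compute()  # doctest: +SKIP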
Create DataFrames¶
read_csv (urlpath[, blocksize, collection, …]) |
Read CSV files into a Dask.DataFrame |
read_table (urlpath[, blocksize, collection, …]) |
Read delimited files into a Dask.DataFrame |
read_fwf (urlpath[, blocksize, collection, …]) |
Read fixed-width files into a Dask.DataFrame |
read_parquet (path[, columns, filters, …]) |
Read a Parquet file into a Dask DataFrame |
read_hdf (pattern, key[, start, stop, …]) |
Read HDF files into a Dask DataFrame |
read_json (url_path[, orient, lines, …]) |
Create a dataframe from a set of JSON files |
read_orc (path[, columns, storage_options]) |
Read dataframe from ORC file(s) |
read_sql_table (table, uri, index_col[, …]) |
Create dataframe from an SQL table. |
from_array (x[, chunksize, columns]) |
Read any slicable array into a Dask Dataframe |
from_bcolz (x[, chunksize, categorize, …]) |
Read BColz CTable into a Dask Dataframe |
from_dask_array (x[, columns, index]) |
Create a Dask DataFrame from a Dask Array. |
from_delayed (dfs[, meta, divisions, prefix, …]) |
Create Dask DataFrame from many Dask Delayed objects |
from_pandas (data[, npartitions, chunksize, …]) |
Construct a Dask DataFrame from a Pandas DataFrame |
dask.bag.core.Bag.to_dataframe ([meta, columns]) |
Create Dask Dataframe from a Dask Bag. |
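For example, a collection can be built from an in-memory pandas object or from files on disk. This is only a sketch; the file pattern 'data/2019-*.csv' is a placeholder, not a real dataset.

>>> import pandas as pd
>>> import dask.dataframe as dd
>>> ddf = dd.from_pandas(pd.DataFrame({'x': range(10)}), npartitions=4)
>>> ddf = dd.read_csv('data/2019-*.csv')  # doctest: +SKIP  (glob over many CSV files)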
Store DataFrames¶
to_csv (df, filename[, single_file, …]) |
Store Dask DataFrame to CSV files |
to_parquet (df, path[, engine, compression, …]) |
Store Dask.dataframe to Parquet files |
to_hdf (df, path, key[, mode, append, …]) |
Store Dask Dataframe to Hierarchical Data Format (HDF) files |
to_records (df) |
Create Dask Array from a Dask Dataframe |
to_bag (df[, index]) |
Create Dask Bag from a Dask DataFrame |
to_json (df, url_path[, orient, lines, …]) |
Write dataframe into JSON text files |
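The writers mirror the readers: each partition is written to its own file, with '*' in a CSV path replaced by the partition number. The sketch below uses placeholder output paths, and to_parquet assumes a parquet engine such as pyarrow or fastparquet is installed.

>>> import pandas as pd
>>> import dask.dataframe as dd
>>> ddf = dd.from_pandas(pd.DataFrame({'x': range(10)}), npartitions=2)
>>> ddf.to_csv('output/part-*.csv')    # doctest: +SKIP
>>> ddf.to_parquet('output/parquet/')  # doctest: +SKIP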
Convert DataFrames¶
DataFrame.to_dask_array ([lengths]) |
Convert a dask DataFrame to a dask array. |
DataFrame.to_delayed ([optimize_graph]) |
Convert into a list of dask.delayed objects, one per partition. |
Reshape DataFrames¶
get_dummies (data[, prefix, prefix_sep, …]) |
Convert categorical variable into dummy/indicator variables. |
pivot_table (df[, index, columns, values, …]) |
Create a spreadsheet-style pivot table as a DataFrame. |
melt (frame[, id_vars, value_vars, var_name, …]) |
Unpivot a DataFrame from wide format to long format, optionally leaving identifier variables set. |
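A small sketch of melt with assumed toy columns: the 'a' and 'b' columns are unpivoted into long format while 'id' is kept as an identifier.

>>> import pandas as pd
>>> import dask.dataframe as dd
>>> ddf = dd.from_pandas(pd.DataFrame({'id': [1, 2], 'a': [10, 20], 'b': [30, 40]}), npartitions=1)
>>> dd.melt(ddf, id_vars='id', value_vars=['a', 'b']).compute()  # doctest: +SKIP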
DataFrame Methods¶
class dask.dataframe.DataFrame(dsk, name, meta, divisions)¶
Parallel Pandas DataFrame

Do not use this class directly. Instead use functions like dd.read_csv, dd.read_parquet, or dd.from_pandas.

Parameters: dsk: dict
The dask graph to compute this DataFrame
name: str
The key prefix that specifies which keys in the dask comprise this particular DataFrame
meta: pandas.DataFrame
An empty pandas.DataFrame with names, dtypes, and index matching the expected output.
divisions: tuple of index values
Values along which we partition our blocks on the index
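A minimal usage sketch (toy data assumed): build the collection with dd.from_pandas rather than the constructor, then inspect its partition structure.

>>> import pandas as pd
>>> import dask.dataframe as dd
>>> ddf = dd.from_pandas(pd.DataFrame({'x': [1, 2, 3, 4]}), npartitions=2)
>>> ddf.npartitions  # doctest: +SKIP
2
>>> ddf.divisions  # index values at the partition boundaries  # doctest: +SKIP
(0, 2, 3)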
abs()¶
Return a Series/DataFrame with absolute numeric value of each element.
This docstring was copied from pandas.core.frame.DataFrame.abs.
Some inconsistencies with the Dask version may exist.
This function only applies to elements that are all numeric.
Returns: abs
Series/DataFrame containing the absolute value of each element.
See also
numpy.absolute
- Calculate the absolute value element-wise.
Notes
For complex inputs, 1.2 + 1j, the absolute value is \(\sqrt{ a^2 + b^2 }\).

Examples
Absolute numeric values in a Series.
>>> s = pd.Series([-1.10, 2, -3.33, 4])  # doctest: +SKIP
>>> s.abs()  # doctest: +SKIP
0    1.10
1    2.00
2    3.33
3    4.00
dtype: float64
Absolute numeric values in a Series with complex numbers.
>>> s = pd.Series([1.2 + 1j])  # doctest: +SKIP
>>> s.abs()  # doctest: +SKIP
0    1.56205
dtype: float64
Absolute numeric values in a Series with a Timedelta element.
>>> s = pd.Series([pd.Timedelta('1 days')])  # doctest: +SKIP
>>> s.abs()  # doctest: +SKIP
0   1 days
dtype: timedelta64[ns]
Select rows with data closest to certain value using argsort (from StackOverflow).
>>> df = pd.DataFrame({  # doctest: +SKIP
...     'a': [4, 5, 6, 7],
...     'b': [10, 20, 30, 40],
...     'c': [100, 50, -30, -50]
... })
>>> df  # doctest: +SKIP
   a   b    c
0  4  10  100
1  5  20   50
2  6  30  -30
3  7  40  -50
>>> df.loc[(df.c - 43).abs().argsort()]  # doctest: +SKIP
   a   b    c
1  5  20   50
0  4  10  100
2  6  30  -30
3  7  40  -50
add(other, axis='columns', level=None, fill_value=None)¶
Get Addition of dataframe and other, element-wise (binary operator add).
This docstring was copied from pandas.core.frame.DataFrame.add.
Some inconsistencies with the Dask version may exist.
Equivalent to dataframe + other, but with support to substitute a fill_value for missing data in one of the inputs. With reverse version, radd.

Among flexible wrappers (add, sub, mul, div, mod, pow) to arithmetic operators: +, -, *, /, //, %, **.
Parameters: other : scalar, sequence, Series, or DataFrame
Any single or multiple element data structure, or list-like object.
axis : {0 or ‘index’, 1 or ‘columns’}
Whether to compare by the index (0 or ‘index’) or columns (1 or ‘columns’). For Series input, axis to match Series index on.
level : int or label
Broadcast across a level, matching Index values on the passed MultiIndex level.
fill_value : float or None, default None
Fill existing missing (NaN) values, and any new element needed for successful DataFrame alignment, with this value before computation. If data in both corresponding DataFrame locations is missing the result will be missing.
Returns: DataFrame
Result of the arithmetic operation.
See also
DataFrame.add
- Add DataFrames.
DataFrame.sub
- Subtract DataFrames.
DataFrame.mul
- Multiply DataFrames.
DataFrame.div
- Divide DataFrames (float division).
DataFrame.truediv
- Divide DataFrames (float division).
DataFrame.floordiv
- Divide DataFrames (integer division).
DataFrame.mod
- Calculate modulo (remainder after division).
DataFrame.pow
- Calculate exponential power.
Notes
Mismatched indices will be unioned together.
Examples
>>> df = pd.DataFrame({'angles': [0, 3, 4], # doctest: +SKIP ... 'degrees': [360, 180, 360]}, ... index=['circle', 'triangle', 'rectangle']) >>> df # doctest: +SKIP angles degrees circle 0 360 triangle 3 180 rectangle 4 360
Add a scalar with the operator version, which returns the same results.
>>> df + 1 # doctest: +SKIP angles degrees circle 1 361 triangle 4 181 rectangle 5 361
>>> df.add(1) # doctest: +SKIP angles degrees circle 1 361 triangle 4 181 rectangle 5 361
Divide by constant with reverse version.
>>> df.div(10) # doctest: +SKIP angles degrees circle 0.0 36.0 triangle 0.3 18.0 rectangle 0.4 36.0
>>> df.rdiv(10) # doctest: +SKIP angles degrees circle inf 0.027778 triangle 3.333333 0.055556 rectangle 2.500000 0.027778
Subtract a list and Series by axis with operator version.
>>> df - [1, 2] # doctest: +SKIP angles degrees circle -1 358 triangle 2 178 rectangle 3 358
>>> df.sub([1, 2], axis='columns') # doctest: +SKIP angles degrees circle -1 358 triangle 2 178 rectangle 3 358
>>> df.sub(pd.Series([1, 1, 1], index=['circle', 'triangle', 'rectangle']), # doctest: +SKIP ... axis='index') angles degrees circle -1 359 triangle 2 179 rectangle 3 359
Multiply a DataFrame of different shape with operator version.
>>> other = pd.DataFrame({'angles': [0, 3, 4]}, # doctest: +SKIP ... index=['circle', 'triangle', 'rectangle']) >>> other # doctest: +SKIP angles circle 0 triangle 3 rectangle 4
>>> df * other # doctest: +SKIP angles degrees circle 0 NaN triangle 9 NaN rectangle 16 NaN
>>> df.mul(other, fill_value=0) # doctest: +SKIP angles degrees circle 0 0.0 triangle 9 0.0 rectangle 16 0.0
Divide by a MultiIndex by level.
>>> df_multindex = pd.DataFrame({'angles': [0, 3, 4, 4, 5, 6], # doctest: +SKIP ... 'degrees': [360, 180, 360, 360, 540, 720]}, ... index=[['A', 'A', 'A', 'B', 'B', 'B'], ... ['circle', 'triangle', 'rectangle', ... 'square', 'pentagon', 'hexagon']]) >>> df_multindex # doctest: +SKIP angles degrees A circle 0 360 triangle 3 180 rectangle 4 360 B square 4 360 pentagon 5 540 hexagon 6 720
>>> df.div(df_multindex, level=1, fill_value=0) # doctest: +SKIP angles degrees A circle NaN 1.0 triangle 1.0 1.0 rectangle 1.0 1.0 B square 0.0 0.0 pentagon 0.0 0.0 hexagon 0.0 0.0
align(other, join='outer', axis=None, fill_value=None)¶
Align two objects on their axes with the specified join method for each axis Index.
This docstring was copied from pandas.core.frame.DataFrame.align.
Some inconsistencies with the Dask version may exist.
Parameters: other : DataFrame or Series
join : {‘outer’, ‘inner’, ‘left’, ‘right’}, default ‘outer’
axis : allowed axis of the other object, default None
Align on index (0), columns (1), or both (None)
level : int or level name, default None (Not supported in Dask)
Broadcast across a level, matching Index values on the passed MultiIndex level
copy : boolean, default True (Not supported in Dask)
Always returns new objects. If copy=False and no reindexing is required then original objects are returned.
fill_value : scalar, default np.NaN
Value to use for missing values. Defaults to NaN, but can be any “compatible” value
method : {‘backfill’, ‘bfill’, ‘pad’, ‘ffill’, None}, default None (Not supported in Dask)
Method to use for filling holes in reindexed Series pad / ffill: propagate last valid observation forward to next valid backfill / bfill: use NEXT valid observation to fill gap
limit : int, default None (Not supported in Dask)
If method is specified, this is the maximum number of consecutive NaN values to forward/backward fill. In other words, if there is a gap with more than this number of consecutive NaNs, it will only be partially filled. If method is not specified, this is the maximum number of entries along the entire axis where NaNs will be filled. Must be greater than 0 if not None.
fill_axis : {0 or ‘index’, 1 or ‘columns’}, default 0 (Not supported in Dask)
Filling axis, method and limit
broadcast_axis : {0 or ‘index’, 1 or ‘columns’}, default None (Not supported in Dask)
Broadcast values along this axis, if aligning two objects of different dimensions
Returns: (left, right) : (DataFrame, type of other)
Aligned objects.
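The docstring above carries no example, so here is a minimal, hedged sketch with toy series: outer-align two differently indexed series, filling missing positions with fill_value.

>>> import pandas as pd
>>> import dask.dataframe as dd
>>> a = dd.from_pandas(pd.Series([1, 2, 3], index=[0, 1, 2]), npartitions=1)
>>> b = dd.from_pandas(pd.Series([10, 20], index=[1, 2]), npartitions=1)
>>> left, right = a.align(b, join='outer', fill_value=0)
>>> left.compute()   # doctest: +SKIP
>>> right.compute()  # doctest: +SKIP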
all(axis=None, skipna=True, split_every=False, out=None)¶
Return whether all elements are True, potentially over an axis.
This docstring was copied from pandas.core.frame.DataFrame.all.
Some inconsistencies with the Dask version may exist.
Returns True unless there is at least one element within a series or along a Dataframe axis that is False or equivalent (e.g. zero or empty).
Parameters: axis : {0 or ‘index’, 1 or ‘columns’, None}, default 0
Indicate which axis or axes should be reduced.
- 0 / ‘index’ : reduce the index, return a Series whose index is the original column labels.
- 1 / ‘columns’ : reduce the columns, return a Series whose index is the original index.
- None : reduce all axes, return a scalar.
bool_only : bool, default None (Not supported in Dask)
Include only boolean columns. If None, will attempt to use everything, then use only boolean data. Not implemented for Series.
skipna : bool, default True
Exclude NA/null values. If the entire row/column is NA and skipna is True, then the result will be True, as for an empty row/column. If skipna is False, then NA are treated as True, because these are not equal to zero.
level : int or level name, default None (Not supported in Dask)
If the axis is a MultiIndex (hierarchical), count along a particular level, collapsing into a Series.
**kwargs : any, default None
Additional keywords have no effect but might be accepted for compatibility with NumPy.
Returns: Series or DataFrame
If level is specified, then a DataFrame is returned; otherwise, a Series is returned.
See also
Series.all
- Return True if all elements are True.
DataFrame.any
- Return True if one (or more) elements are True.
Examples
Series
>>> pd.Series([True, True]).all() # doctest: +SKIP True >>> pd.Series([True, False]).all() # doctest: +SKIP False >>> pd.Series([]).all() # doctest: +SKIP True >>> pd.Series([np.nan]).all() # doctest: +SKIP True >>> pd.Series([np.nan]).all(skipna=False) # doctest: +SKIP True
DataFrames
Create a dataframe from a dictionary.
>>> df = pd.DataFrame({'col1': [True, True], 'col2': [True, False]}) # doctest: +SKIP >>> df # doctest: +SKIP col1 col2 0 True True 1 True False
Default behaviour checks if column-wise values all return True.
>>> df.all() # doctest: +SKIP col1 True col2 False dtype: bool
Specify axis='columns' to check if row-wise values all return True.

>>> df.all(axis='columns')  # doctest: +SKIP
0     True
1    False
dtype: bool
Or axis=None for whether every value is True.

>>> df.all(axis=None)  # doctest: +SKIP
False
any(axis=None, skipna=True, split_every=False, out=None)¶
Return whether any element is True, potentially over an axis.
This docstring was copied from pandas.core.frame.DataFrame.any.
Some inconsistencies with the Dask version may exist.
Returns False unless there is at least one element within a series or along a Dataframe axis that is True or equivalent (e.g. non-zero or non-empty).
Parameters: axis : {0 or ‘index’, 1 or ‘columns’, None}, default 0
Indicate which axis or axes should be reduced.
- 0 / ‘index’ : reduce the index, return a Series whose index is the original column labels.
- 1 / ‘columns’ : reduce the columns, return a Series whose index is the original index.
- None : reduce all axes, return a scalar.
bool_only : bool, default None (Not supported in Dask)
Include only boolean columns. If None, will attempt to use everything, then use only boolean data. Not implemented for Series.
skipna : bool, default True
Exclude NA/null values. If the entire row/column is NA and skipna is True, then the result will be False, as for an empty row/column. If skipna is False, then NA are treated as True, because these are not equal to zero.
level : int or level name, default None (Not supported in Dask)
If the axis is a MultiIndex (hierarchical), count along a particular level, collapsing into a Series.
**kwargs : any, default None
Additional keywords have no effect but might be accepted for compatibility with NumPy.
Returns: Series or DataFrame
If level is specified, then a DataFrame is returned; otherwise, a Series is returned.
See also
numpy.any
- Numpy version of this method.
Series.any
- Return whether any element is True.
Series.all
- Return whether all elements are True.
DataFrame.any
- Return whether any element is True over requested axis.
DataFrame.all
- Return whether all elements are True over requested axis.
Examples
Series
For Series input, the output is a scalar indicating whether any element is True.
>>> pd.Series([False, False]).any() # doctest: +SKIP False >>> pd.Series([True, False]).any() # doctest: +SKIP True >>> pd.Series([]).any() # doctest: +SKIP False >>> pd.Series([np.nan]).any() # doctest: +SKIP False >>> pd.Series([np.nan]).any(skipna=False) # doctest: +SKIP True
DataFrame
Whether each column contains at least one True element (the default).
>>> df = pd.DataFrame({"A": [1, 2], "B": [0, 2], "C": [0, 0]}) # doctest: +SKIP >>> df # doctest: +SKIP A B C 0 1 0 0 1 2 2 0
>>> df.any() # doctest: +SKIP A True B True C False dtype: bool
Aggregating over the columns.
>>> df = pd.DataFrame({"A": [True, False], "B": [1, 2]}) # doctest: +SKIP >>> df # doctest: +SKIP A B 0 True 1 1 False 2
>>> df.any(axis='columns') # doctest: +SKIP 0 True 1 True dtype: bool
>>> df = pd.DataFrame({"A": [True, False], "B": [1, 0]}) # doctest: +SKIP >>> df # doctest: +SKIP A B 0 True 1 1 False 0
>>> df.any(axis='columns') # doctest: +SKIP 0 True 1 False dtype: bool
Aggregating over the entire DataFrame with axis=None.

>>> df.any(axis=None)  # doctest: +SKIP
True
any for an empty DataFrame is an empty Series.
>>> pd.DataFrame([]).any() # doctest: +SKIP Series([], dtype: bool)
append(other, interleave_partitions=False)¶
Append rows of other to the end of caller, returning a new object.
This docstring was copied from pandas.core.frame.DataFrame.append.
Some inconsistencies with the Dask version may exist.
Columns in other that are not in the caller are added as new columns.
Parameters: other : DataFrame or Series/dict-like object, or list of these
The data to append.
ignore_index : boolean, default False (Not supported in Dask)
If True, do not use the index labels.
verify_integrity : boolean, default False (Not supported in Dask)
If True, raise ValueError on creating index with duplicates.
sort : boolean, default None (Not supported in Dask)
Sort columns if the columns of self and other are not aligned. The default sorting is deprecated and will change to not-sorting in a future version of pandas. Explicitly pass sort=True to silence the warning and sort. Explicitly pass sort=False to silence the warning and not sort.

New in version 0.23.0.
Returns: DataFrame
See also
concat
- General function to concatenate DataFrame or Series objects.
Notes
If a list of dict/series is passed and the keys are all contained in the DataFrame’s index, the order of the columns in the resulting DataFrame will be unchanged.
Iteratively appending rows to a DataFrame can be more computationally intensive than a single concatenate. A better solution is to append those rows to a list and then concatenate the list with the original DataFrame all at once.
Examples
>>> df = pd.DataFrame([[1, 2], [3, 4]], columns=list('AB')) # doctest: +SKIP >>> df # doctest: +SKIP A B 0 1 2 1 3 4 >>> df2 = pd.DataFrame([[5, 6], [7, 8]], columns=list('AB')) # doctest: +SKIP >>> df.append(df2) # doctest: +SKIP A B 0 1 2 1 3 4 0 5 6 1 7 8
With ignore_index set to True:
>>> df.append(df2, ignore_index=True) # doctest: +SKIP A B 0 1 2 1 3 4 2 5 6 3 7 8
The following, while not recommended methods for generating DataFrames, show two ways to generate a DataFrame from multiple data sources.
Less efficient:
>>> df = pd.DataFrame(columns=['A']) # doctest: +SKIP >>> for i in range(5): # doctest: +SKIP ... df = df.append({'A': i}, ignore_index=True) >>> df # doctest: +SKIP A 0 0 1 1 2 2 3 3 4 4
More efficient:
>>> pd.concat([pd.DataFrame([i], columns=['A']) for i in range(5)], # doctest: +SKIP ... ignore_index=True) A 0 0 1 1 2 2 3 3 4 4
apply(func, axis=0, broadcast=None, raw=False, reduce=None, args=(), meta='__no_default__', **kwds)¶
Parallel version of pandas.DataFrame.apply
This mimics the pandas version except for the following:
- Only axis=1 is supported (and must be specified explicitly).
- The user should provide output metadata via the meta keyword.
Parameters: func : function
Function to apply to each column/row
axis : {0 or ‘index’, 1 or ‘columns’}, default 0
- 0 or ‘index’: apply function to each column (NOT SUPPORTED)
- 1 or ‘columns’: apply function to each row
meta : pd.DataFrame, pd.Series, dict, iterable, tuple, optional
An empty
pd.DataFrame
orpd.Series
that matches the dtypes and column names of the output. This metadata is necessary for many algorithms in dask dataframe to work. For ease of use, some alternative inputs are also available. Instead of aDataFrame
, adict
of{name: dtype}
or iterable of(name, dtype)
can be provided (note that the order of the names should match the order of the columns). Instead of a series, a tuple of(name, dtype)
can be used. If not provided, dask will try to infer the metadata. This may lead to unexpected results, so providingmeta
is recommended. For more information, seedask.dataframe.utils.make_meta
.args : tuple
Positional arguments to pass to function in addition to the array/series
Additional keyword arguments will be passed as keywords to the function
Returns: applied : Series or DataFrame
See also
dask.DataFrame.map_partitions
Examples
>>> import dask.dataframe as dd >>> df = pd.DataFrame({'x': [1, 2, 3, 4, 5], ... 'y': [1., 2., 3., 4., 5.]}) >>> ddf = dd.from_pandas(df, npartitions=2)
Apply a function row-wise, passing in extra arguments in
args
andkwargs
:>>> def myadd(row, a, b=1): ... return row.sum() + a + b >>> res = ddf.apply(myadd, axis=1, args=(2,), b=1.5) # doctest: +SKIP
By default, dask tries to infer the output metadata by running your provided function on some fake data. This works well in many cases, but can sometimes be expensive, or even fail. To avoid this, you can manually specify the output metadata with the
meta
keyword. This can be specified in many forms, for more information seedask.dataframe.utils.make_meta
.Here we specify the output is a Series with name
'x'
, and dtypefloat64
:>>> res = ddf.apply(myadd, axis=1, args=(2,), b=1.5, meta=('x', 'f8'))
In the case where the metadata doesn’t change, you can also pass in the object itself directly:
>>> res = ddf.apply(lambda row: row + 1, axis=1, meta=ddf)
- Only
applymap(func, meta='__no_default__')¶
Apply a function to a Dataframe elementwise.
This docstring was copied from pandas.core.frame.DataFrame.applymap.
Some inconsistencies with the Dask version may exist.
This method applies a function that accepts and returns a scalar to every element of a DataFrame.
Parameters: func : callable
Python function, returns a single value from a single value.
Returns: DataFrame
Transformed DataFrame.
See also
DataFrame.apply
- Apply a function along input axis of DataFrame.
Notes
In the current implementation applymap calls func twice on the first column/row to decide whether it can take a fast or slow code path. This can lead to unexpected behavior if func has side-effects, as they will take effect twice for the first column/row.
Examples
>>> df = pd.DataFrame([[1, 2.12], [3.356, 4.567]]) # doctest: +SKIP >>> df # doctest: +SKIP 0 1 0 1.000 2.120 1 3.356 4.567
>>> df.applymap(lambda x: len(str(x))) # doctest: +SKIP 0 1 0 3 4 1 5 5
Note that a vectorized version of func often exists, which will be much faster. You could square each number elementwise.
>>> df.applymap(lambda x: x**2) # doctest: +SKIP 0 1 0 1.000000 4.494400 1 11.262736 20.857489
But it’s better to avoid applymap in that case.
>>> df ** 2 # doctest: +SKIP 0 1 0 1.000000 4.494400 1 11.262736 20.857489
assign(**kwargs)¶
Assign new columns to a DataFrame.
This docstring was copied from pandas.core.frame.DataFrame.assign.
Some inconsistencies with the Dask version may exist.
Returns a new object with all original columns in addition to new ones. Existing columns that are re-assigned will be overwritten.
Parameters: **kwargs : dict of {str: callable or Series}
The column names are keywords. If the values are callable, they are computed on the DataFrame and assigned to the new columns. The callable must not change input DataFrame (though pandas doesn’t check it). If the values are not callable, (e.g. a Series, scalar, or array), they are simply assigned.
Returns: DataFrame
A new DataFrame with the new columns in addition to all the existing columns.
Notes
Assigning multiple columns within the same assign is possible. For Python 3.6 and above, later items in ‘**kwargs’ may refer to newly created or modified columns in ‘df’; items are computed and assigned into ‘df’ in order. For Python 3.5 and below, the order of keyword arguments is not specified, so you cannot refer to newly created or modified columns. All items are computed first, and then assigned in alphabetical order.

Changed in version 0.23.0: Keyword argument order is maintained for Python 3.6 and later.
Examples
>>> df = pd.DataFrame({'temp_c': [17.0, 25.0]}, # doctest: +SKIP ... index=['Portland', 'Berkeley']) >>> df # doctest: +SKIP temp_c Portland 17.0 Berkeley 25.0
Where the value is a callable, evaluated on df:
>>> df.assign(temp_f=lambda x: x.temp_c * 9 / 5 + 32) # doctest: +SKIP temp_c temp_f Portland 17.0 62.6 Berkeley 25.0 77.0
Alternatively, the same behavior can be achieved by directly referencing an existing Series or sequence:
>>> df.assign(temp_f=df['temp_c'] * 9 / 5 + 32) # doctest: +SKIP temp_c temp_f Portland 17.0 62.6 Berkeley 25.0 77.0
In Python 3.6+, you can create multiple columns within the same assign where one of the columns depends on another one defined within the same assign:
>>> df.assign(temp_f=lambda x: x['temp_c'] * 9 / 5 + 32, # doctest: +SKIP ... temp_k=lambda x: (x['temp_f'] + 459.67) * 5 / 9) temp_c temp_f temp_k Portland 17.0 62.6 290.15 Berkeley 25.0 77.0 298.15
astype(dtype)¶
Cast a pandas object to a specified dtype dtype.

This docstring was copied from pandas.core.frame.DataFrame.astype.
Some inconsistencies with the Dask version may exist.
Parameters: dtype : data type, or dict of column name -> data type
Use a numpy.dtype or Python type to cast entire pandas object to the same type. Alternatively, use {col: dtype, …}, where col is a column label and dtype is a numpy.dtype or Python type to cast one or more of the DataFrame’s columns to column-specific types.
copy : bool, default True (Not supported in Dask)
Return a copy when copy=True (be very careful setting copy=False as changes to values then may propagate to other pandas objects).
errors : {‘raise’, ‘ignore’}, default ‘raise’ (Not supported in Dask)
Control raising of exceptions on invalid data for provided dtype.
- raise : allow exceptions to be raised
- ignore : suppress exceptions. On error return original object
New in version 0.20.0.
kwargs : keyword arguments to pass on to the constructor
Returns: casted : same type as caller
See also
to_datetime
- Convert argument to datetime.
to_timedelta
- Convert argument to timedelta.
to_numeric
- Convert argument to a numeric type.
numpy.ndarray.astype
- Cast a numpy array to a specified type.
Examples
Create a DataFrame:
>>> d = {'col1': [1, 2], 'col2': [3, 4]} # doctest: +SKIP >>> df = pd.DataFrame(data=d) # doctest: +SKIP >>> df.dtypes # doctest: +SKIP col1 int64 col2 int64 dtype: object
Cast all columns to int32:
>>> df.astype('int32').dtypes # doctest: +SKIP col1 int32 col2 int32 dtype: object
Cast col1 to int32 using a dictionary:
>>> df.astype({'col1': 'int32'}).dtypes # doctest: +SKIP col1 int32 col2 int64 dtype: object
Create a series:
>>> ser = pd.Series([1, 2], dtype='int32') # doctest: +SKIP >>> ser # doctest: +SKIP 0 1 1 2 dtype: int32 >>> ser.astype('int64') # doctest: +SKIP 0 1 1 2 dtype: int64
Convert to categorical type:
>>> ser.astype('category') # doctest: +SKIP 0 1 1 2 dtype: category Categories (2, int64): [1, 2]
Convert to ordered categorical type with custom ordering:
>>> cat_dtype = pd.api.types.CategoricalDtype( # doctest: +SKIP ... categories=[2, 1], ordered=True) >>> ser.astype(cat_dtype) # doctest: +SKIP 0 1 1 2 dtype: category Categories (2, int64): [2 < 1]
Note that using copy=False and changing data on a new pandas object may propagate changes:

>>> s1 = pd.Series([1,2])  # doctest: +SKIP
>>> s2 = s1.astype('int64', copy=False)  # doctest: +SKIP
>>> s2[0] = 10  # doctest: +SKIP
>>> s1  # note that s1[0] has changed too  # doctest: +SKIP
0    10
1     2
dtype: int64
bfill(axis=None, limit=None)¶
Synonym for DataFrame.fillna() with method='bfill'.

This docstring was copied from pandas.core.frame.DataFrame.bfill.
Some inconsistencies with the Dask version may exist.
Returns: DataFrame
Object with missing values filled.
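A small sketch with toy data: bfill propagates the next valid observation backward, exactly like fillna(method='bfill').

>>> import pandas as pd
>>> import dask.dataframe as dd
>>> ddf = dd.from_pandas(pd.DataFrame({'x': [None, 2.0, None, 4.0]}), npartitions=1)
>>> ddf.bfill().compute()  # doctest: +SKIP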
categorize(columns=None, index=None, split_every=None, **kwargs)¶
Convert columns of the DataFrame to category dtype.
Parameters: columns : list, optional
A list of column names to convert to categoricals. By default any column with an object dtype is converted to a categorical, and any unknown categoricals are made known.
index : bool, optional
Whether to categorize the index. By default, object indices are converted to categorical, and unknown categorical indices are made known. Set True to always categorize the index, False to never.
split_every : int, optional
Group partitions into groups of this size while performing a tree-reduction. If set to False, no tree-reduction will be used. Default is 16.
kwargs
Keyword arguments are passed on to compute.
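A brief sketch with an assumed toy column: convert an object column to a known categorical dtype. Discovering the categories scans the data, so this call triggers a computation.

>>> import pandas as pd
>>> import dask.dataframe as dd
>>> ddf = dd.from_pandas(pd.DataFrame({'color': ['red', 'blue', 'red'], 'value': [1, 2, 3]}), npartitions=2)
>>> ddf = ddf.categorize(columns=['color'])  # doctest: +SKIP
>>> ddf.dtypes  # doctest: +SKIP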
clear_divisions()¶
Forget division information
clip(lower=None, upper=None, out=None)¶
Trim values at input threshold(s).
This docstring was copied from pandas.core.frame.DataFrame.clip.
Some inconsistencies with the Dask version may exist.
Assigns values outside boundary to boundary values. Thresholds can be singular values or array like, and in the latter case the clipping is performed element-wise in the specified axis.
Parameters: lower : float or array_like, default None
Minimum threshold value. All values below this threshold will be set to it.
upper : float or array_like, default None
Maximum threshold value. All values above this threshold will be set to it.
axis : int or str axis name, optional (Not supported in Dask)
Align object with lower and upper along the given axis.
inplace : bool, default False (Not supported in Dask)
Whether to perform the operation in place on the data.
New in version 0.21.0.
*args, **kwargs
Additional keywords have no effect but might be accepted for compatibility with numpy.
Returns: Series or DataFrame
Same type as calling object with the values outside the clip boundaries replaced.
Examples
>>> data = {'col_0': [9, -3, 0, -1, 5], 'col_1': [-2, -7, 6, 8, -5]} # doctest: +SKIP >>> df = pd.DataFrame(data) # doctest: +SKIP >>> df # doctest: +SKIP col_0 col_1 0 9 -2 1 -3 -7 2 0 6 3 -1 8 4 5 -5
Clips per column using lower and upper thresholds:
>>> df.clip(-4, 6) # doctest: +SKIP col_0 col_1 0 6 -2 1 -3 -4 2 0 6 3 -1 6 4 5 -4
Clips using specific lower and upper thresholds per column element:
>>> t = pd.Series([2, -4, -1, 6, 3]) # doctest: +SKIP >>> t # doctest: +SKIP 0 2 1 -4 2 -1 3 6 4 3 dtype: int64
>>> df.clip(t, t + 4, axis=0) # doctest: +SKIP col_0 col_1 0 6 2 1 -3 -4 2 0 3 3 6 8 4 5 3
clip_lower(threshold)¶
Trim values below a given threshold.
This docstring was copied from pandas.core.frame.DataFrame.clip_lower.
Some inconsistencies with the Dask version may exist.
Deprecated since version 0.24.0: Use clip(lower=threshold) instead.
Elements below the threshold will be changed to match the threshold value(s). Threshold can be a single value or an array, in the latter case it performs the truncation element-wise.
Parameters: threshold : numeric or array-like
Minimum value allowed. All values below threshold will be set to this value.
- float : every value is compared to threshold.
- array-like : The shape of threshold should match the object it’s compared to. When self is a Series, threshold should be the same length. When self is a DataFrame, threshold should be 2-D and the same shape as self for axis=None, or 1-D and the same length as the axis being compared.
axis : {0 or ‘index’, 1 or ‘columns’}, default 0 (Not supported in Dask)
Align self with threshold along the given axis.
inplace : bool, default False (Not supported in Dask)
Whether to perform the operation in place on the data.
New in version 0.21.0.
Returns: Series or DataFrame
Original data with values trimmed.
See also
Series.clip
- General purpose method to trim Series values to given threshold(s).
DataFrame.clip
- General purpose method to trim DataFrame values to given threshold(s).
Examples
Series single threshold clipping:
>>> s = pd.Series([5, 6, 7, 8, 9]) # doctest: +SKIP >>> s.clip(lower=8) # doctest: +SKIP 0 8 1 8 2 8 3 8 4 9 dtype: int64
Series clipping element-wise using an array of thresholds. threshold should be the same length as the Series.
>>> elemwise_thresholds = [4, 8, 7, 2, 5] # doctest: +SKIP >>> s.clip(lower=elemwise_thresholds) # doctest: +SKIP 0 5 1 8 2 7 3 8 4 9 dtype: int64
DataFrames can be compared to a scalar.
>>> df = pd.DataFrame({"A": [1, 3, 5], "B": [2, 4, 6]}) # doctest: +SKIP >>> df # doctest: +SKIP A B 0 1 2 1 3 4 2 5 6
>>> df.clip(lower=3) # doctest: +SKIP A B 0 3 3 1 3 4 2 5 6
Or to an array of values. By default, threshold should be the same shape as the DataFrame.
>>> df.clip(lower=np.array([[3, 4], [2, 2], [6, 2]])) # doctest: +SKIP A B 0 3 4 1 3 4 2 6 6
Control how threshold is broadcast with axis. In this case threshold should be the same length as the axis specified by axis.
>>> df.clip(lower=[3, 3, 5], axis='index') # doctest: +SKIP A B 0 3 3 1 3 4 2 5 6
>>> df.clip(lower=[4, 5], axis='columns') # doctest: +SKIP A B 0 4 5 1 4 5 2 5 6
clip_upper(threshold)¶
Trim values above a given threshold.
This docstring was copied from pandas.core.frame.DataFrame.clip_upper.
Some inconsistencies with the Dask version may exist.
Deprecated since version 0.24.0: Use clip(upper=threshold) instead.
Elements above the threshold will be changed to match the threshold value(s). Threshold can be a single value or an array, in the latter case it performs the truncation element-wise.
Parameters: threshold : numeric or array-like
Maximum value allowed. All values above threshold will be set to this value.
- float : every value is compared to threshold.
- array-like : The shape of threshold should match the object it’s compared to. When self is a Series, threshold should be the same length. When self is a DataFrame, threshold should be 2-D and the same shape as self for axis=None, or 1-D and the same length as the axis being compared.
axis : {0 or ‘index’, 1 or ‘columns’}, default 0 (Not supported in Dask)
Align object with threshold along the given axis.
inplace : bool, default False (Not supported in Dask)
Whether to perform the operation in place on the data.
New in version 0.21.0.
Returns: Series or DataFrame
Original data with values trimmed.
See also
Series.clip
- General purpose method to trim Series values to given threshold(s).
DataFrame.clip
- General purpose method to trim DataFrame values to given threshold(s).
Examples
>>> s = pd.Series([1, 2, 3, 4, 5]) # doctest: +SKIP >>> s # doctest: +SKIP 0 1 1 2 2 3 3 4 4 5 dtype: int64
>>> s.clip(upper=3) # doctest: +SKIP 0 1 1 2 2 3 3 3 4 3 dtype: int64
>>> elemwise_thresholds = [5, 4, 3, 2, 1] # doctest: +SKIP >>> elemwise_thresholds # doctest: +SKIP [5, 4, 3, 2, 1]
>>> s.clip(upper=elemwise_thresholds) # doctest: +SKIP 0 1 1 2 2 3 3 2 4 1 dtype: int64
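Because clip_upper is deprecated, new code would normally spell this as clip(upper=...). A minimal sketch on a Dask Series built from the toy data above (partitioning is arbitrary):
>>> import pandas as pd
>>> import dask.dataframe as dd
>>> s = dd.from_pandas(pd.Series([1, 2, 3, 4, 5]), npartitions=2)
>>> s.clip(upper=3).compute()   # preferred spelling; s.clip_upper(3) is the deprecated form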
-
combine
(other, func, fill_value=None, overwrite=True)¶ Perform column-wise combine with another DataFrame.
This docstring was copied from pandas.core.frame.DataFrame.combine.
Some inconsistencies with the Dask version may exist.
Combines a DataFrame with other DataFrame using func to element-wise combine columns. The row and column indexes of the resulting DataFrame will be the union of the two.
Parameters: other : DataFrame
The DataFrame to merge column-wise.
func : function
Function that takes two Series as inputs and returns a Series or a scalar. Used to merge the two DataFrames column by column.
fill_value : scalar value, default None
The value to fill NaNs with prior to passing any column to the merge func.
overwrite : bool, default True
If True, columns in self that do not exist in other will be overwritten with NaNs.
Returns: DataFrame
Combination of the provided DataFrames.
See also
DataFrame.combine_first
- Combine two DataFrame objects and default to non-null values in frame calling the method.
Examples
Combine using a simple function that chooses the smaller column.
>>> df1 = pd.DataFrame({'A': [0, 0], 'B': [4, 4]}) # doctest: +SKIP >>> df2 = pd.DataFrame({'A': [1, 1], 'B': [3, 3]}) # doctest: +SKIP >>> take_smaller = lambda s1, s2: s1 if s1.sum() < s2.sum() else s2 # doctest: +SKIP >>> df1.combine(df2, take_smaller) # doctest: +SKIP A B 0 0 3 1 0 3
Example using a true element-wise combine function.
>>> df1 = pd.DataFrame({'A': [5, 0], 'B': [2, 4]}) # doctest: +SKIP >>> df2 = pd.DataFrame({'A': [1, 1], 'B': [3, 3]}) # doctest: +SKIP >>> df1.combine(df2, np.minimum) # doctest: +SKIP A B 0 1 2 1 0 3
Using fill_value fills Nones prior to passing the column to the merge function.
>>> df1 = pd.DataFrame({'A': [0, 0], 'B': [None, 4]}) # doctest: +SKIP >>> df2 = pd.DataFrame({'A': [1, 1], 'B': [3, 3]}) # doctest: +SKIP >>> df1.combine(df2, take_smaller, fill_value=-5) # doctest: +SKIP A B 0 0 -5.0 1 0 4.0
However, if the same element in both dataframes is None, that None is preserved
>>> df1 = pd.DataFrame({'A': [0, 0], 'B': [None, 4]}) # doctest: +SKIP >>> df2 = pd.DataFrame({'A': [1, 1], 'B': [None, 3]}) # doctest: +SKIP >>> df1.combine(df2, take_smaller, fill_value=-5) # doctest: +SKIP A B 0 0 -5.0 1 0 3.0
Example that demonstrates the use of overwrite and behavior when the axis differ between the dataframes.
>>> df1 = pd.DataFrame({'A': [0, 0], 'B': [4, 4]}) # doctest: +SKIP >>> df2 = pd.DataFrame({'B': [3, 3], 'C': [-10, 1], }, index=[1, 2]) # doctest: +SKIP >>> df1.combine(df2, take_smaller) # doctest: +SKIP A B C 0 NaN NaN NaN 1 NaN 3.0 -10.0 2 NaN 3.0 1.0
>>> df1.combine(df2, take_smaller, overwrite=False) # doctest: +SKIP A B C 0 0.0 NaN NaN 1 0.0 3.0 -10.0 2 NaN 3.0 1.0
Demonstrating the preference of the passed in dataframe.
>>> df2 = pd.DataFrame({'B': [3, 3], 'C': [1, 1], }, index=[1, 2]) # doctest: +SKIP >>> df2.combine(df1, take_smaller) # doctest: +SKIP A B C 0 0.0 NaN NaN 1 0.0 3.0 NaN 2 NaN 3.0 NaN
>>> df2.combine(df1, take_smaller, overwrite=False) # doctest: +SKIP A B C 0 0.0 NaN NaN 1 0.0 3.0 1.0 2 NaN 3.0 1.0
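A sketch of the same column-wise combine on Dask DataFrames, using np.minimum as the combining function (toy frames, single partition, purely for illustration):
>>> import numpy as np
>>> import pandas as pd
>>> import dask.dataframe as dd
>>> ddf1 = dd.from_pandas(pd.DataFrame({'A': [5, 0], 'B': [2, 4]}), npartitions=1)
>>> ddf2 = dd.from_pandas(pd.DataFrame({'A': [1, 1], 'B': [3, 3]}), npartitions=1)
>>> ddf1.combine(ddf2, np.minimum).compute()   # element-wise minimum of the two frames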
-
combine_first
(other)¶ Update null elements with value in the same location in other.
This docstring was copied from pandas.core.frame.DataFrame.combine_first.
Some inconsistencies with the Dask version may exist.
Combine two DataFrame objects by filling null values in one DataFrame with non-null values from other DataFrame. The row and column indexes of the resulting DataFrame will be the union of the two.
Parameters: other : DataFrame
Provided DataFrame to use to fill null values.
Returns: DataFrame
See also
DataFrame.combine
- Perform series-wise operation on two DataFrames using a given function.
Examples
>>> df1 = pd.DataFrame({'A': [None, 0], 'B': [None, 4]}) # doctest: +SKIP >>> df2 = pd.DataFrame({'A': [1, 1], 'B': [3, 3]}) # doctest: +SKIP >>> df1.combine_first(df2) # doctest: +SKIP A B 0 1.0 3.0 1 0.0 4.0
Null values still persist if the location of that null value does not exist in other
>>> df1 = pd.DataFrame({'A': [None, 0], 'B': [4, None]}) # doctest: +SKIP >>> df2 = pd.DataFrame({'B': [3, 3], 'C': [1, 1]}, index=[1, 2]) # doctest: +SKIP >>> df1.combine_first(df2) # doctest: +SKIP A B C 0 NaN 4.0 NaN 1 0.0 3.0 1.0 2 NaN 3.0 1.0
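A minimal sketch of combine_first on Dask DataFrames, mirroring the first pandas example above (toy data, single partition):
>>> import pandas as pd
>>> import dask.dataframe as dd
>>> ddf1 = dd.from_pandas(pd.DataFrame({'A': [None, 0], 'B': [None, 4]}), npartitions=1)
>>> ddf2 = dd.from_pandas(pd.DataFrame({'A': [1, 1], 'B': [3, 3]}), npartitions=1)
>>> ddf1.combine_first(ddf2).compute()   # nulls in ddf1 filled from ddf2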
-
compute
(**kwargs)¶ Compute this dask collection
This turns a lazy Dask collection into its in-memory equivalent. For example, a Dask array turns into a NumPy array and a Dask DataFrame turns into a pandas DataFrame. The entire dataset must fit into memory before calling this operation.
Parameters: scheduler : string, optional
Which scheduler to use like “threads”, “synchronous” or “processes”. If not provided, the default is to check the global settings first, and then fall back to the collection defaults.
optimize_graph : bool, optional
If True [default], the graph is optimized before computation. Otherwise the graph is run as is. This can be useful for debugging.
kwargs
Extra keywords to forward to the scheduler function.
See also
dask.base.compute
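A short sketch of the laziness described above. The file pattern and column names are hypothetical; the scheduler argument is optional and shown only to illustrate the parameter:
>>> import dask.dataframe as dd
>>> ddf = dd.read_csv('data-*.csv')              # hypothetical input files; nothing is read yet
>>> totals = ddf.groupby('key')['value'].sum()   # still lazy: only a task graph is built
>>> totals.compute(scheduler='threads')          # execute the graph, return a pandas Series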
-
copy
()¶ Make a copy of the dataframe
This is strictly a shallow copy of the underlying computational graph. It does not affect the underlying data.
-
corr
(method='pearson', min_periods=None, split_every=False)¶ Compute pairwise correlation of columns, excluding NA/null values.
This docstring was copied from pandas.core.frame.DataFrame.corr.
Some inconsistencies with the Dask version may exist.
Parameters: method : {‘pearson’, ‘kendall’, ‘spearman’} or callable
- pearson : standard correlation coefficient
- kendall : Kendall Tau correlation coefficient
- spearman : Spearman rank correlation
- callable : callable with input two 1d ndarrays and returning a float. Note that the returned matrix from corr will have 1 along the diagonals and will be symmetric regardless of the callable’s behavior.
New in version 0.24.0.
min_periods : int, optional
Minimum number of observations required per pair of columns to have a valid result. Currently only available for Pearson and Spearman correlation.
Returns: DataFrame
Correlation matrix.
See also
DataFrame.corrwith
Series.corr
Examples
>>> def histogram_intersection(a, b): # doctest: +SKIP ... v = np.minimum(a, b).sum().round(decimals=1) ... return v >>> df = pd.DataFrame([(.2, .3), (.0, .6), (.6, .0), (.2, .1)], # doctest: +SKIP ... columns=['dogs', 'cats']) >>> df.corr(method=histogram_intersection) # doctest: +SKIP dogs cats dogs 1.0 0.3 cats 0.3 1.0
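A sketch of the same computation on a Dask DataFrame with the default Pearson method (the data and partition count are illustrative):
>>> import pandas as pd
>>> import dask.dataframe as dd
>>> pdf = pd.DataFrame([(.2, .3), (.0, .6), (.6, .0), (.2, .1)], columns=['dogs', 'cats'])
>>> ddf = dd.from_pandas(pdf, npartitions=2)
>>> ddf.corr().compute()   # Pearson correlation matrix, returned as a pandas DataFrame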
-
count
(axis=None, split_every=False)¶ Count non-NA cells for each column or row.
This docstring was copied from pandas.core.frame.DataFrame.count.
Some inconsistencies with the Dask version may exist.
The values None, NaN, NaT, and optionally numpy.inf (depending on pandas.options.mode.use_inf_as_na) are considered NA.
Parameters: axis : {0 or ‘index’, 1 or ‘columns’}, default 0
If 0 or ‘index’ counts are generated for each column. If 1 or ‘columns’ counts are generated for each row.
level : int or str, optional (Not supported in Dask)
If the axis is a MultiIndex (hierarchical), count along a particular level, collapsing into a DataFrame. A str specifies the level name.
numeric_only : bool, default False (Not supported in Dask)
Include only float, int or boolean data.
Returns: Series or DataFrame
For each column/row the number of non-NA/null entries. If level is specified returns a DataFrame.
See also
Series.count
- Number of non-NA elements in a Series.
DataFrame.shape
- Number of DataFrame rows and columns (including NA elements).
DataFrame.isna
- Boolean same-sized DataFrame showing places of NA elements.
Examples
Constructing DataFrame from a dictionary:
>>> df = pd.DataFrame({"Person": # doctest: +SKIP ... ["John", "Myla", "Lewis", "John", "Myla"], ... "Age": [24., np.nan, 21., 33, 26], ... "Single": [False, True, True, True, False]}) >>> df # doctest: +SKIP Person Age Single 0 John 24.0 False 1 Myla NaN True 2 Lewis 21.0 True 3 John 33.0 True 4 Myla 26.0 False
Notice the uncounted NA values:
>>> df.count() # doctest: +SKIP Person 5 Age 4 Single 5 dtype: int64
Counts for each row:
>>> df.count(axis='columns') # doctest: +SKIP 0 3 1 2 2 3 3 3 4 3 dtype: int64
Counts for one level of a MultiIndex:
>>> df.set_index(["Person", "Single"]).count(level="Person") # doctest: +SKIP Age Person John 2 Lewis 1 Myla 1
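A minimal sketch of count on a Dask DataFrame built from the same toy data (partition count arbitrary); level-based counting is not supported in Dask, so only the plain forms are shown:
>>> import numpy as np
>>> import pandas as pd
>>> import dask.dataframe as dd
>>> pdf = pd.DataFrame({"Person": ["John", "Myla", "Lewis", "John", "Myla"],
...                     "Age": [24., np.nan, 21., 33, 26],
...                     "Single": [False, True, True, True, False]})
>>> ddf = dd.from_pandas(pdf, npartitions=2)
>>> ddf.count().compute()                 # per-column counts
>>> ddf.count(axis='columns').compute()   # per-row counts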
-
cov
(min_periods=None, split_every=False)¶ Compute pairwise covariance of columns, excluding NA/null values.
This docstring was copied from pandas.core.frame.DataFrame.cov.
Some inconsistencies with the Dask version may exist.
Compute the pairwise covariance among the series of a DataFrame. The returned data frame is the covariance matrix of the columns of the DataFrame.
Both NA and null values are automatically excluded from the calculation. (See the note below about bias from missing values.) A threshold can be set for the minimum number of observations for each value created. Comparisons with observations below this threshold will be returned as NaN.
This method is generally used for the analysis of time series data to understand the relationship between different measures across time.
Parameters: min_periods : int, optional
Minimum number of observations required per pair of columns to have a valid result.
Returns: DataFrame
The covariance matrix of the series of the DataFrame.
See also
Series.cov
- Compute covariance with another Series.
core.window.EWM.cov
- Exponential weighted sample covariance.
core.window.Expanding.cov
- Expanding sample covariance.
core.window.Rolling.cov
- Rolling sample covariance.
Notes
Returns the covariance matrix of the DataFrame’s time series. The covariance is normalized by N-1.
For DataFrames that have Series that are missing data (assuming that data is missing at random) the returned covariance matrix will be an unbiased estimate of the variance and covariance between the member Series.
However, for many applications this estimate may not be acceptable because the estimated covariance matrix is not guaranteed to be positive semi-definite. This could lead to estimated correlations having absolute values which are greater than one, and/or a non-invertible covariance matrix. See Estimation of covariance matrices for more details.
Examples
>>> df = pd.DataFrame([(1, 2), (0, 3), (2, 0), (1, 1)], # doctest: +SKIP ... columns=['dogs', 'cats']) >>> df.cov() # doctest: +SKIP dogs cats dogs 0.666667 -1.000000 cats -1.000000 1.666667
>>> np.random.seed(42) # doctest: +SKIP >>> df = pd.DataFrame(np.random.randn(1000, 5), # doctest: +SKIP ... columns=['a', 'b', 'c', 'd', 'e']) >>> df.cov() # doctest: +SKIP a b c d e a 0.998438 -0.020161 0.059277 -0.008943 0.014144 b -0.020161 1.059352 -0.008543 -0.024738 0.009826 c 0.059277 -0.008543 1.010670 -0.001486 -0.000271 d -0.008943 -0.024738 -0.001486 0.921297 -0.013692 e 0.014144 0.009826 -0.000271 -0.013692 0.977795
Minimum number of periods
This method also supports an optional min_periods keyword that specifies the required minimum number of non-NA observations for each column pair in order to have a valid result:
>>> np.random.seed(42) # doctest: +SKIP >>> df = pd.DataFrame(np.random.randn(20, 3), # doctest: +SKIP ... columns=['a', 'b', 'c']) >>> df.loc[df.index[:5], 'a'] = np.nan # doctest: +SKIP >>> df.loc[df.index[5:10], 'b'] = np.nan # doctest: +SKIP >>> df.cov(min_periods=12) # doctest: +SKIP a b c a 0.316741 NaN -0.150812 b NaN 1.248003 0.191417 c -0.150812 0.191417 0.895202
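A minimal sketch of cov on a Dask DataFrame (toy data, arbitrary partitioning); min_periods is accepted as in pandas:
>>> import pandas as pd
>>> import dask.dataframe as dd
>>> pdf = pd.DataFrame([(1, 2), (0, 3), (2, 0), (1, 1)], columns=['dogs', 'cats'])
>>> ddf = dd.from_pandas(pdf, npartitions=2)
>>> ddf.cov().compute()                # full covariance matrix
>>> ddf.cov(min_periods=3).compute()   # require at least 3 paired observations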
-
cummax
(axis=None, skipna=True, out=None)¶ Return cumulative maximum over a DataFrame or Series axis.
This docstring was copied from pandas.core.frame.DataFrame.cummax.
Some inconsistencies with the Dask version may exist.
Returns a DataFrame or Series of the same size containing the cumulative maximum.
Parameters: axis : {0 or ‘index’, 1 or ‘columns’}, default 0
The index or the name of the axis. 0 is equivalent to None or ‘index’.
skipna : boolean, default True
Exclude NA/null values. If an entire row/column is NA, the result will be NA.
*args, **kwargs :
Additional keywords have no effect but might be accepted for compatibility with NumPy.
Returns: Series or DataFrame
See also
core.window.Expanding.max
- Similar functionality but ignores NaN values.
DataFrame.max
- Return the maximum over DataFrame axis.
DataFrame.cummax
- Return cumulative maximum over DataFrame axis.
DataFrame.cummin
- Return cumulative minimum over DataFrame axis.
DataFrame.cumsum
- Return cumulative sum over DataFrame axis.
DataFrame.cumprod
- Return cumulative product over DataFrame axis.
Examples
Series
>>> s = pd.Series([2, np.nan, 5, -1, 0]) # doctest: +SKIP >>> s # doctest: +SKIP 0 2.0 1 NaN 2 5.0 3 -1.0 4 0.0 dtype: float64
By default, NA values are ignored.
>>> s.cummax() # doctest: +SKIP 0 2.0 1 NaN 2 5.0 3 5.0 4 5.0 dtype: float64
To include NA values in the operation, use
skipna=False
>>> s.cummax(skipna=False) # doctest: +SKIP 0 2.0 1 NaN 2 NaN 3 NaN 4 NaN dtype: float64
DataFrame
>>> df = pd.DataFrame([[2.0, 1.0], # doctest: +SKIP ... [3.0, np.nan], ... [1.0, 0.0]], ... columns=list('AB')) >>> df # doctest: +SKIP A B 0 2.0 1.0 1 3.0 NaN 2 1.0 0.0
By default, iterates over rows and finds the maximum in each column. This is equivalent to axis=None or axis='index'.
>>> df.cummax() # doctest: +SKIP A B 0 2.0 1.0 1 3.0 NaN 2 3.0 1.0
To iterate over columns and find the maximum in each row, use
axis=1
>>> df.cummax(axis=1) # doctest: +SKIP A B 0 2.0 2.0 1 3.0 NaN 2 1.0 1.0
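The same call shape applies to the whole cumulative family documented below (cummin, cumsum, cumprod): the Dask versions are invoked like their pandas counterparts and return lazy results. A minimal sketch with illustrative data and partitioning:
>>> import numpy as np
>>> import pandas as pd
>>> import dask.dataframe as dd
>>> s = dd.from_pandas(pd.Series([2, np.nan, 5, -1, 0]), npartitions=2)
>>> s.cummax().compute()               # NA values are skipped by default
>>> s.cummax(skipna=False).compute()   # propagate NA through the running maximum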
-
cummin
(axis=None, skipna=True, out=None)¶ Return cumulative minimum over a DataFrame or Series axis.
This docstring was copied from pandas.core.frame.DataFrame.cummin.
Some inconsistencies with the Dask version may exist.
Returns a DataFrame or Series of the same size containing the cumulative minimum.
Parameters: axis : {0 or ‘index’, 1 or ‘columns’}, default 0
The index or the name of the axis. 0 is equivalent to None or ‘index’.
skipna : boolean, default True
Exclude NA/null values. If an entire row/column is NA, the result will be NA.
*args, **kwargs :
Additional keywords have no effect but might be accepted for compatibility with NumPy.
Returns: Series or DataFrame
See also
core.window.Expanding.min
- Similar functionality but ignores NaN values.
DataFrame.min
- Return the minimum over DataFrame axis.
DataFrame.cummax
- Return cumulative maximum over DataFrame axis.
DataFrame.cummin
- Return cumulative minimum over DataFrame axis.
DataFrame.cumsum
- Return cumulative sum over DataFrame axis.
DataFrame.cumprod
- Return cumulative product over DataFrame axis.
Examples
Series
>>> s = pd.Series([2, np.nan, 5, -1, 0]) # doctest: +SKIP >>> s # doctest: +SKIP 0 2.0 1 NaN 2 5.0 3 -1.0 4 0.0 dtype: float64
By default, NA values are ignored.
>>> s.cummin() # doctest: +SKIP 0 2.0 1 NaN 2 2.0 3 -1.0 4 -1.0 dtype: float64
To include NA values in the operation, use
skipna=False
>>> s.cummin(skipna=False) # doctest: +SKIP 0 2.0 1 NaN 2 NaN 3 NaN 4 NaN dtype: float64
DataFrame
>>> df = pd.DataFrame([[2.0, 1.0], # doctest: +SKIP ... [3.0, np.nan], ... [1.0, 0.0]], ... columns=list('AB')) >>> df # doctest: +SKIP A B 0 2.0 1.0 1 3.0 NaN 2 1.0 0.0
By default, iterates over rows and finds the minimum in each column. This is equivalent to axis=None or axis='index'.
>>> df.cummin() # doctest: +SKIP A B 0 2.0 1.0 1 2.0 NaN 2 1.0 0.0
To iterate over columns and find the minimum in each row, use
axis=1
>>> df.cummin(axis=1) # doctest: +SKIP A B 0 2.0 1.0 1 3.0 NaN 2 1.0 0.0
-
cumprod
(axis=None, skipna=True, dtype=None, out=None)¶ Return cumulative product over a DataFrame or Series axis.
This docstring was copied from pandas.core.frame.DataFrame.cumprod.
Some inconsistencies with the Dask version may exist.
Returns a DataFrame or Series of the same size containing the cumulative product.
Parameters: axis : {0 or ‘index’, 1 or ‘columns’}, default 0
The index or the name of the axis. 0 is equivalent to None or ‘index’.
skipna : boolean, default True
Exclude NA/null values. If an entire row/column is NA, the result will be NA.
*args, **kwargs :
Additional keywords have no effect but might be accepted for compatibility with NumPy.
Returns: Series or DataFrame
See also
core.window.Expanding.prod
- Similar functionality but ignores NaN values.
DataFrame.prod
- Return the product over DataFrame axis.
DataFrame.cummax
- Return cumulative maximum over DataFrame axis.
DataFrame.cummin
- Return cumulative minimum over DataFrame axis.
DataFrame.cumsum
- Return cumulative sum over DataFrame axis.
DataFrame.cumprod
- Return cumulative product over DataFrame axis.
Examples
Series
>>> s = pd.Series([2, np.nan, 5, -1, 0]) # doctest: +SKIP >>> s # doctest: +SKIP 0 2.0 1 NaN 2 5.0 3 -1.0 4 0.0 dtype: float64
By default, NA values are ignored.
>>> s.cumprod() # doctest: +SKIP 0 2.0 1 NaN 2 10.0 3 -10.0 4 -0.0 dtype: float64
To include NA values in the operation, use
skipna=False
>>> s.cumprod(skipna=False) # doctest: +SKIP 0 2.0 1 NaN 2 NaN 3 NaN 4 NaN dtype: float64
DataFrame
>>> df = pd.DataFrame([[2.0, 1.0], # doctest: +SKIP ... [3.0, np.nan], ... [1.0, 0.0]], ... columns=list('AB')) >>> df # doctest: +SKIP A B 0 2.0 1.0 1 3.0 NaN 2 1.0 0.0
By default, iterates over rows and finds the product in each column. This is equivalent to axis=None or axis='index'.
>>> df.cumprod() # doctest: +SKIP A B 0 2.0 1.0 1 6.0 NaN 2 6.0 0.0
To iterate over columns and find the product in each row, use
axis=1
>>> df.cumprod(axis=1) # doctest: +SKIP A B 0 2.0 2.0 1 3.0 NaN 2 1.0 0.0
-
cumsum
(axis=None, skipna=True, dtype=None, out=None)¶ Return cumulative sum over a DataFrame or Series axis.
This docstring was copied from pandas.core.frame.DataFrame.cumsum.
Some inconsistencies with the Dask version may exist.
Returns a DataFrame or Series of the same size containing the cumulative sum.
Parameters: axis : {0 or ‘index’, 1 or ‘columns’}, default 0
The index or the name of the axis. 0 is equivalent to None or ‘index’.
skipna : boolean, default True
Exclude NA/null values. If an entire row/column is NA, the result will be NA.
*args, **kwargs :
Additional keywords have no effect but might be accepted for compatibility with NumPy.
Returns: Series or DataFrame
See also
core.window.Expanding.sum
- Similar functionality but ignores NaN values.
DataFrame.sum
- Return the sum over DataFrame axis.
DataFrame.cummax
- Return cumulative maximum over DataFrame axis.
DataFrame.cummin
- Return cumulative minimum over DataFrame axis.
DataFrame.cumsum
- Return cumulative sum over DataFrame axis.
DataFrame.cumprod
- Return cumulative product over DataFrame axis.
Examples
Series
>>> s = pd.Series([2, np.nan, 5, -1, 0]) # doctest: +SKIP >>> s # doctest: +SKIP 0 2.0 1 NaN 2 5.0 3 -1.0 4 0.0 dtype: float64
By default, NA values are ignored.
>>> s.cumsum() # doctest: +SKIP 0 2.0 1 NaN 2 7.0 3 6.0 4 6.0 dtype: float64
To include NA values in the operation, use
skipna=False
>>> s.cumsum(skipna=False) # doctest: +SKIP 0 2.0 1 NaN 2 NaN 3 NaN 4 NaN dtype: float64
DataFrame
>>> df = pd.DataFrame([[2.0, 1.0], # doctest: +SKIP ... [3.0, np.nan], ... [1.0, 0.0]], ... columns=list('AB')) >>> df # doctest: +SKIP A B 0 2.0 1.0 1 3.0 NaN 2 1.0 0.0
By default, iterates over rows and finds the sum in each column. This is equivalent to axis=None or axis='index'.
>>> df.cumsum() # doctest: +SKIP A B 0 2.0 1.0 1 5.0 NaN 2 6.0 1.0
To iterate over columns and find the sum in each row, use
axis=1
>>> df.cumsum(axis=1) # doctest: +SKIP A B 0 2.0 3.0 1 3.0 NaN 2 1.0 1.0
-
describe
(split_every=False, percentiles=None, percentiles_method='default', include=None, exclude=None)¶ Generate descriptive statistics that summarize the central tendency, dispersion and shape of a dataset’s distribution, excluding NaN values.
This docstring was copied from pandas.core.frame.DataFrame.describe.
Some inconsistencies with the Dask version may exist.
Analyzes both numeric and object series, as well as DataFrame column sets of mixed data types. The output will vary depending on what is provided. Refer to the notes below for more detail.
Parameters: percentiles : list-like of numbers, optional
The percentiles to include in the output. All should fall between 0 and 1. The default is [.25, .5, .75], which returns the 25th, 50th, and 75th percentiles.
include : ‘all’, list-like of dtypes or None (default), optional
A white list of data types to include in the result. Ignored for Series. Here are the options:
- ‘all’ : All columns of the input will be included in the output.
- A list-like of dtypes : Limits the results to the provided data types. To limit the result to numeric types submit numpy.number. To limit it instead to object columns submit the numpy.object data type. Strings can also be used in the style of select_dtypes (e.g. df.describe(include=['O'])). To select pandas categorical columns, use 'category'.
- None (default) : The result will include all numeric columns.
exclude : list-like of dtypes or None (default), optional
A black list of data types to omit from the result. Ignored for Series. Here are the options:
- A list-like of dtypes : Excludes the provided data types from the result. To exclude numeric types submit numpy.number. To exclude object columns submit the data type numpy.object. Strings can also be used in the style of select_dtypes (e.g. df.describe(exclude=['O'])). To exclude pandas categorical columns, use 'category'.
- None (default) : The result will exclude nothing.
Returns: Series or DataFrame
Summary statistics of the Series or Dataframe provided.
See also
DataFrame.count
- Count number of non-NA/null observations.
DataFrame.max
- Maximum of the values in the object.
DataFrame.min
- Minimum of the values in the object.
DataFrame.mean
- Mean of the values.
DataFrame.std
- Standard deviation of the observations.
DataFrame.select_dtypes
- Subset of a DataFrame including/excluding columns based on their dtype.
Notes
For numeric data, the result’s index will include count, mean, std, min, max as well as lower, 50 and upper percentiles. By default the lower percentile is 25 and the upper percentile is 75. The 50 percentile is the same as the median.
For object data (e.g. strings or timestamps), the result’s index will include count, unique, top, and freq. The top is the most common value. The freq is the most common value’s frequency. Timestamps also include the first and last items.
If multiple object values have the highest count, then the count and top results will be arbitrarily chosen from among those with the highest count.
For mixed data types provided via a DataFrame, the default is to return only an analysis of numeric columns. If the dataframe consists only of object and categorical data without any numeric columns, the default is to return an analysis of both the object and categorical columns. If include='all' is provided as an option, the result will include a union of attributes of each type.
The include and exclude parameters can be used to limit which columns in a DataFrame are analyzed for the output. The parameters are ignored when analyzing a Series.
Examples
Describing a numeric Series.
>>> s = pd.Series([1, 2, 3]) # doctest: +SKIP >>> s.describe() # doctest: +SKIP count 3.0 mean 2.0 std 1.0 min 1.0 25% 1.5 50% 2.0 75% 2.5 max 3.0 dtype: float64
Describing a categorical Series.
>>> s = pd.Series(['a', 'a', 'b', 'c']) # doctest: +SKIP >>> s.describe() # doctest: +SKIP count 4 unique 3 top a freq 2 dtype: object
Describing a timestamp Series.
>>> s = pd.Series([ # doctest: +SKIP ... np.datetime64("2000-01-01"), ... np.datetime64("2010-01-01"), ... np.datetime64("2010-01-01") ... ]) >>> s.describe() # doctest: +SKIP count 3 unique 2 top 2010-01-01 00:00:00 freq 2 first 2000-01-01 00:00:00 last 2010-01-01 00:00:00 dtype: object
Describing a DataFrame. By default only numeric fields are returned.
>>> df = pd.DataFrame({'categorical': pd.Categorical(['d','e','f']), # doctest: +SKIP ... 'numeric': [1, 2, 3], ... 'object': ['a', 'b', 'c'] ... }) >>> df.describe() # doctest: +SKIP numeric count 3.0 mean 2.0 std 1.0 min 1.0 25% 1.5 50% 2.0 75% 2.5 max 3.0
Describing all columns of a DataFrame regardless of data type.
>>> df.describe(include='all') # doctest: +SKIP categorical numeric object count 3 3.0 3 unique 3 NaN 3 top f NaN c freq 1 NaN 1 mean NaN 2.0 NaN std NaN 1.0 NaN min NaN 1.0 NaN 25% NaN 1.5 NaN 50% NaN 2.0 NaN 75% NaN 2.5 NaN max NaN 3.0 NaN
Describing a column from a DataFrame by accessing it as an attribute.
>>> df.numeric.describe() # doctest: +SKIP count 3.0 mean 2.0 std 1.0 min 1.0 25% 1.5 50% 2.0 75% 2.5 max 3.0 Name: numeric, dtype: float64
Including only numeric columns in a DataFrame description.
>>> df.describe(include=[np.number]) # doctest: +SKIP numeric count 3.0 mean 2.0 std 1.0 min 1.0 25% 1.5 50% 2.0 75% 2.5 max 3.0
Including only string columns in a DataFrame description.
>>> df.describe(include=[np.object]) # doctest: +SKIP object count 3 unique 3 top c freq 1
Including only categorical columns from a DataFrame description.
>>> df.describe(include=['category']) # doctest: +SKIP categorical count 3 unique 3 top f freq 1
Excluding numeric columns from a DataFrame description.
>>> df.describe(exclude=[np.number]) # doctest: +SKIP categorical object count 3 3 unique 3 3 top f c freq 1 1
Excluding object columns from a DataFrame description.
>>> df.describe(exclude=[np.object]) # doctest: +SKIP categorical numeric count 3 3.0 unique 3 NaN top f NaN freq 1 NaN mean NaN 2.0 std NaN 1.0 min NaN 1.0 25% NaN 1.5 50% NaN 2.0 75% NaN 2.5 max NaN 3.0
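A minimal sketch of describe on a Dask DataFrame (toy data; the percentiles argument is shown only to illustrate the parameter):
>>> import pandas as pd
>>> import dask.dataframe as dd
>>> pdf = pd.DataFrame({'numeric': [1, 2, 3], 'object': ['a', 'b', 'c']})
>>> ddf = dd.from_pandas(pdf, npartitions=1)
>>> ddf.describe().compute()                       # numeric summary, as in pandas
>>> ddf.describe(percentiles=[.1, .9]).compute()   # custom percentiles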
-
diff
(periods=1, axis=0)¶ First discrete difference of element.
This docstring was copied from pandas.core.frame.DataFrame.diff.
Some inconsistencies with the Dask version may exist.
Note
Pandas currently uses an object-dtype column to represent boolean data with missing values. This can cause issues for boolean-specific operations, like |. To enable boolean-specific operations, at the cost of metadata that doesn’t match pandas, use .astype(bool) after the shift.
Calculates the difference of a DataFrame element compared with another element in the DataFrame (default is the element in the same column of the previous row).
Parameters: periods : int, default 1
Periods to shift for calculating difference, accepts negative values.
axis : {0 or ‘index’, 1 or ‘columns’}, default 0
Take difference over rows (0) or columns (1).
New in version 0.16.1.
Returns: DataFrame
See also
Series.diff
- First discrete difference for a Series.
DataFrame.pct_change
- Percent change over given number of periods.
DataFrame.shift
- Shift index by desired number of periods with an optional time freq.
Examples
Difference with previous row
>>> df = pd.DataFrame({'a': [1, 2, 3, 4, 5, 6], # doctest: +SKIP ... 'b': [1, 1, 2, 3, 5, 8], ... 'c': [1, 4, 9, 16, 25, 36]}) >>> df # doctest: +SKIP a b c 0 1 1 1 1 2 1 4 2 3 2 9 3 4 3 16 4 5 5 25 5 6 8 36
>>> df.diff() # doctest: +SKIP a b c 0 NaN NaN NaN 1 1.0 0.0 3.0 2 1.0 1.0 5.0 3 1.0 1.0 7.0 4 1.0 2.0 9.0 5 1.0 3.0 11.0
Difference with previous column
>>> df.diff(axis=1) # doctest: +SKIP a b c 0 NaN 0.0 0.0 1 NaN -1.0 3.0 2 NaN -1.0 7.0 3 NaN -1.0 13.0 4 NaN 0.0 20.0 5 NaN 2.0 28.0
Difference with 3rd previous row
>>> df.diff(periods=3) # doctest: +SKIP a b c 0 NaN NaN NaN 1 NaN NaN NaN 2 NaN NaN NaN 3 3.0 2.0 15.0 4 3.0 4.0 21.0 5 3.0 6.0 27.0
Difference with following row
>>> df.diff(periods=-1) # doctest: +SKIP a b c 0 -1.0 0.0 -3.0 1 -1.0 -1.0 -5.0 2 -1.0 -1.0 -7.0 3 -1.0 -2.0 -9.0 4 -1.0 -3.0 -11.0 5 NaN NaN NaN
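A minimal sketch of diff on a Dask DataFrame built from the same toy data (partition count arbitrary):
>>> import pandas as pd
>>> import dask.dataframe as dd
>>> pdf = pd.DataFrame({'a': [1, 2, 3, 4, 5, 6],
...                     'b': [1, 1, 2, 3, 5, 8],
...                     'c': [1, 4, 9, 16, 25, 36]})
>>> ddf = dd.from_pandas(pdf, npartitions=2)
>>> ddf.diff().compute()         # difference with the previous row
>>> ddf.diff(axis=1).compute()   # difference with the previous column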
-
div
(other, axis='columns', level=None, fill_value=None)¶ Get Floating division of dataframe and other, element-wise (binary operator truediv).
This docstring was copied from pandas.core.frame.DataFrame.div.
Some inconsistencies with the Dask version may exist.
Equivalent to dataframe / other, but with support to substitute a fill_value for missing data in one of the inputs. With reverse version, rtruediv.
Among flexible wrappers (add, sub, mul, div, mod, pow) to arithmetic operators: +, -, *, /, //, %, **.
Parameters: other : scalar, sequence, Series, or DataFrame
Any single or multiple element data structure, or list-like object.
axis : {0 or ‘index’, 1 or ‘columns’}
Whether to compare by the index (0 or ‘index’) or columns (1 or ‘columns’). For Series input, axis to match Series index on.
level : int or label
Broadcast across a level, matching Index values on the passed MultiIndex level.
fill_value : float or None, default None
Fill existing missing (NaN) values, and any new element needed for successful DataFrame alignment, with this value before computation. If data in both corresponding DataFrame locations is missing the result will be missing.
Returns: DataFrame
Result of the arithmetic operation.
See also
DataFrame.add
- Add DataFrames.
DataFrame.sub
- Subtract DataFrames.
DataFrame.mul
- Multiply DataFrames.
DataFrame.div
- Divide DataFrames (float division).
DataFrame.truediv
- Divide DataFrames (float division).
DataFrame.floordiv
- Divide DataFrames (integer division).
DataFrame.mod
- Calculate modulo (remainder after division).
DataFrame.pow
- Calculate exponential power.
Notes
Mismatched indices will be unioned together.
Examples
>>> df = pd.DataFrame({'angles': [0, 3, 4], # doctest: +SKIP ... 'degrees': [360, 180, 360]}, ... index=['circle', 'triangle', 'rectangle']) >>> df # doctest: +SKIP angles degrees circle 0 360 triangle 3 180 rectangle 4 360
Add a scalar with operator version, which returns the same results.
>>> df + 1 # doctest: +SKIP angles degrees circle 1 361 triangle 4 181 rectangle 5 361
>>> df.add(1) # doctest: +SKIP angles degrees circle 1 361 triangle 4 181 rectangle 5 361
Divide by constant with reverse version.
>>> df.div(10) # doctest: +SKIP angles degrees circle 0.0 36.0 triangle 0.3 18.0 rectangle 0.4 36.0
>>> df.rdiv(10) # doctest: +SKIP angles degrees circle inf 0.027778 triangle 3.333333 0.055556 rectangle 2.500000 0.027778
Subtract a list and Series by axis with operator version.
>>> df - [1, 2] # doctest: +SKIP angles degrees circle -1 358 triangle 2 178 rectangle 3 358
>>> df.sub([1, 2], axis='columns') # doctest: +SKIP angles degrees circle -1 358 triangle 2 178 rectangle 3 358
>>> df.sub(pd.Series([1, 1, 1], index=['circle', 'triangle', 'rectangle']), # doctest: +SKIP ... axis='index') angles degrees circle -1 359 triangle 2 179 rectangle 3 359
Multiply a DataFrame of different shape with operator version.
>>> other = pd.DataFrame({'angles': [0, 3, 4]}, # doctest: +SKIP ... index=['circle', 'triangle', 'rectangle']) >>> other # doctest: +SKIP angles circle 0 triangle 3 rectangle 4
>>> df * other # doctest: +SKIP angles degrees circle 0 NaN triangle 9 NaN rectangle 16 NaN
>>> df.mul(other, fill_value=0) # doctest: +SKIP angles degrees circle 0 0.0 triangle 9 0.0 rectangle 16 0.0
Divide by a MultiIndex by level.
>>> df_multindex = pd.DataFrame({'angles': [0, 3, 4, 4, 5, 6], # doctest: +SKIP ... 'degrees': [360, 180, 360, 360, 540, 720]}, ... index=[['A', 'A', 'A', 'B', 'B', 'B'], ... ['circle', 'triangle', 'rectangle', ... 'square', 'pentagon', 'hexagon']]) >>> df_multindex # doctest: +SKIP angles degrees A circle 0 360 triangle 3 180 rectangle 4 360 B square 4 360 pentagon 5 540 hexagon 6 720
>>> df.div(df_multindex, level=1, fill_value=0) # doctest: +SKIP angles degrees A circle NaN 1.0 triangle 1.0 1.0 rectangle 1.0 1.0 B square 0.0 0.0 pentagon 0.0 0.0 hexagon 0.0 0.0
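A minimal sketch of the same arithmetic on a Dask DataFrame (toy data, single partition):
>>> import pandas as pd
>>> import dask.dataframe as dd
>>> pdf = pd.DataFrame({'angles': [0, 3, 4], 'degrees': [360, 180, 360]},
...                    index=['circle', 'triangle', 'rectangle'])
>>> ddf = dd.from_pandas(pdf, npartitions=1)
>>> ddf.div(10).compute()    # same result as ddf / 10
>>> ddf.rdiv(10).compute()   # reversed operands: 10 / ddf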
-
divide
(other, axis='columns', level=None, fill_value=None)¶ Get Floating division of dataframe and other, element-wise (binary operator truediv).
This docstring was copied from pandas.core.frame.DataFrame.divide.
Some inconsistencies with the Dask version may exist.
Equivalent to dataframe / other, but with support to substitute a fill_value for missing data in one of the inputs. With reverse version, rtruediv.
Among flexible wrappers (add, sub, mul, div, mod, pow) to arithmetic operators: +, -, *, /, //, %, **.
Parameters: other : scalar, sequence, Series, or DataFrame
Any single or multiple element data structure, or list-like object.
axis : {0 or ‘index’, 1 or ‘columns’}
Whether to compare by the index (0 or ‘index’) or columns (1 or ‘columns’). For Series input, axis to match Series index on.
level : int or label
Broadcast across a level, matching Index values on the passed MultiIndex level.
fill_value : float or None, default None
Fill existing missing (NaN) values, and any new element needed for successful DataFrame alignment, with this value before computation. If data in both corresponding DataFrame locations is missing the result will be missing.
Returns: DataFrame
Result of the arithmetic operation.
See also
DataFrame.add
- Add DataFrames.
DataFrame.sub
- Subtract DataFrames.
DataFrame.mul
- Multiply DataFrames.
DataFrame.div
- Divide DataFrames (float division).
DataFrame.truediv
- Divide DataFrames (float division).
DataFrame.floordiv
- Divide DataFrames (integer division).
DataFrame.mod
- Calculate modulo (remainder after division).
DataFrame.pow
- Calculate exponential power.
Notes
Mismatched indices will be unioned together.
Examples
>>> df = pd.DataFrame({'angles': [0, 3, 4], # doctest: +SKIP ... 'degrees': [360, 180, 360]}, ... index=['circle', 'triangle', 'rectangle']) >>> df # doctest: +SKIP angles degrees circle 0 360 triangle 3 180 rectangle 4 360
Add a scalar with operator version, which returns the same results.
>>> df + 1 # doctest: +SKIP angles degrees circle 1 361 triangle 4 181 rectangle 5 361
>>> df.add(1) # doctest: +SKIP angles degrees circle 1 361 triangle 4 181 rectangle 5 361
Divide by constant with reverse version.
>>> df.div(10) # doctest: +SKIP angles degrees circle 0.0 36.0 triangle 0.3 18.0 rectangle 0.4 36.0
>>> df.rdiv(10) # doctest: +SKIP angles degrees circle inf 0.027778 triangle 3.333333 0.055556 rectangle 2.500000 0.027778
Subtract a list and Series by axis with operator version.
>>> df - [1, 2] # doctest: +SKIP angles degrees circle -1 358 triangle 2 178 rectangle 3 358
>>> df.sub([1, 2], axis='columns') # doctest: +SKIP angles degrees circle -1 358 triangle 2 178 rectangle 3 358
>>> df.sub(pd.Series([1, 1, 1], index=['circle', 'triangle', 'rectangle']), # doctest: +SKIP ... axis='index') angles degrees circle -1 359 triangle 2 179 rectangle 3 359
Multiply a DataFrame of different shape with operator version.
>>> other = pd.DataFrame({'angles': [0, 3, 4]}, # doctest: +SKIP ... index=['circle', 'triangle', 'rectangle']) >>> other # doctest: +SKIP angles circle 0 triangle 3 rectangle 4
>>> df * other # doctest: +SKIP angles degrees circle 0 NaN triangle 9 NaN rectangle 16 NaN
>>> df.mul(other, fill_value=0) # doctest: +SKIP angles degrees circle 0 0.0 triangle 9 0.0 rectangle 16 0.0
Divide by a MultiIndex by level.
>>> df_multindex = pd.DataFrame({'angles': [0, 3, 4, 4, 5, 6], # doctest: +SKIP ... 'degrees': [360, 180, 360, 360, 540, 720]}, ... index=[['A', 'A', 'A', 'B', 'B', 'B'], ... ['circle', 'triangle', 'rectangle', ... 'square', 'pentagon', 'hexagon']]) >>> df_multindex # doctest: +SKIP angles degrees A circle 0 360 triangle 3 180 rectangle 4 360 B square 4 360 pentagon 5 540 hexagon 6 720
>>> df.div(df_multindex, level=1, fill_value=0) # doctest: +SKIP angles degrees A circle NaN 1.0 triangle 1.0 1.0 rectangle 1.0 1.0 B square 0.0 0.0 pentagon 0.0 0.0 hexagon 0.0 0.0
-
drop
(labels=None, axis=0, columns=None, errors='raise')¶ Drop specified labels from rows or columns.
This docstring was copied from pandas.core.frame.DataFrame.drop.
Some inconsistencies with the Dask version may exist.
Remove rows or columns by specifying label names and corresponding axis, or by specifying directly index or column names. When using a multi-index, labels on different levels can be removed by specifying the level.
Parameters: labels : single label or list-like
Index or column labels to drop.
axis : {0 or ‘index’, 1 or ‘columns’}, default 0
Whether to drop labels from the index (0 or ‘index’) or columns (1 or ‘columns’).
index : single label or list-like (Not supported in Dask)
Alternative to specifying axis (labels, axis=0 is equivalent to index=labels).
New in version 0.21.0.
columns : single label or list-like
Alternative to specifying axis (labels, axis=1 is equivalent to columns=labels).
New in version 0.21.0.
level : int or level name, optional (Not supported in Dask)
For MultiIndex, level from which the labels will be removed.
inplace : bool, default False (Not supported in Dask)
If True, do operation inplace and return None.
errors : {‘ignore’, ‘raise’}, default ‘raise’
If ‘ignore’, suppress error and only existing labels are dropped.
Returns: DataFrame
DataFrame without the removed index or column labels.
Raises: KeyError
If any of the labels is not found in the selected axis.
See also
DataFrame.loc
- Label-location based indexer for selection by label.
DataFrame.dropna
- Return DataFrame with labels on given axis omitted where (all or any) data are missing.
DataFrame.drop_duplicates
- Return DataFrame with duplicate rows removed, optionally only considering certain columns.
Series.drop
- Return Series with specified index labels removed.
Examples
>>> df = pd.DataFrame(np.arange(12).reshape(3, 4), # doctest: +SKIP ... columns=['A', 'B', 'C', 'D']) >>> df # doctest: +SKIP A B C D 0 0 1 2 3 1 4 5 6 7 2 8 9 10 11
Drop columns
>>> df.drop(['B', 'C'], axis=1) # doctest: +SKIP A D 0 0 3 1 4 7 2 8 11
>>> df.drop(columns=['B', 'C']) # doctest: +SKIP A D 0 0 3 1 4 7 2 8 11
Drop a row by index
>>> df.drop([0, 1]) # doctest: +SKIP A B C D 2 8 9 10 11
Drop columns and/or rows of MultiIndex DataFrame
>>> midx = pd.MultiIndex(levels=[['lama', 'cow', 'falcon'], # doctest: +SKIP ... ['speed', 'weight', 'length']], ... codes=[[0, 0, 0, 1, 1, 1, 2, 2, 2], ... [0, 1, 2, 0, 1, 2, 0, 1, 2]]) >>> df = pd.DataFrame(index=midx, columns=['big', 'small'], # doctest: +SKIP ... data=[[45, 30], [200, 100], [1.5, 1], [30, 20], ... [250, 150], [1.5, 0.8], [320, 250], ... [1, 0.8], [0.3, 0.2]]) >>> df # doctest: +SKIP big small lama speed 45.0 30.0 weight 200.0 100.0 length 1.5 1.0 cow speed 30.0 20.0 weight 250.0 150.0 length 1.5 0.8 falcon speed 320.0 250.0 weight 1.0 0.8 length 0.3 0.2
>>> df.drop(index='cow', columns='small') # doctest: +SKIP big lama speed 45.0 weight 200.0 length 1.5 falcon speed 320.0 weight 1.0 length 0.3
>>> df.drop(index='length', level=1) # doctest: +SKIP big small lama speed 45.0 30.0 weight 200.0 100.0 cow speed 30.0 20.0 weight 250.0 150.0 falcon speed 320.0 250.0 weight 1.0 0.8
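A minimal sketch of drop on a Dask DataFrame (toy data, arbitrary partitioning). Dropping columns is the common case on a partitioned frame, so only that form is shown:
>>> import numpy as np
>>> import pandas as pd
>>> import dask.dataframe as dd
>>> pdf = pd.DataFrame(np.arange(12).reshape(3, 4), columns=['A', 'B', 'C', 'D'])
>>> ddf = dd.from_pandas(pdf, npartitions=2)
>>> ddf.drop(columns=['B', 'C']).compute()   # drop by column name
>>> ddf.drop(['B', 'C'], axis=1).compute()   # equivalent spelling with axis=1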
-
drop_duplicates
(subset=None, split_every=None, split_out=1, **kwargs)¶ Return DataFrame with duplicate rows removed, optionally only considering certain columns. Indexes, including time indexes, are ignored.
This docstring was copied from pandas.core.frame.DataFrame.drop_duplicates.
Some inconsistencies with the Dask version may exist.
Parameters: subset : column label or sequence of labels, optional
Only consider certain columns for identifying duplicates, by default use all of the columns
keep : {‘first’, ‘last’, False}, default ‘first’ (Not supported in Dask)
- first : Drop duplicates except for the first occurrence.
- last : Drop duplicates except for the last occurrence.
- False : Drop all duplicates.
inplace : boolean, default False (Not supported in Dask)
Whether to drop duplicates in place or to return a copy
Returns: DataFrame
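A minimal sketch of drop_duplicates on a Dask DataFrame (toy data, arbitrary partitioning); split_out, a Dask-specific argument in the signature above, controls how many partitions the deduplicated result is split into:
>>> import pandas as pd
>>> import dask.dataframe as dd
>>> pdf = pd.DataFrame({'brand': ['Yum', 'Yum', 'Indomie'],
...                     'style': ['cup', 'cup', 'pack']})
>>> ddf = dd.from_pandas(pdf, npartitions=2)
>>> ddf.drop_duplicates().compute()                      # deduplicate across all partitions
>>> ddf.drop_duplicates(subset=['brand'], split_out=2)   # still lazy; result has 2 partitions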
-
dropna
(how='any', subset=None, thresh=None)¶ Remove missing values.
This docstring was copied from pandas.core.frame.DataFrame.dropna.
Some inconsistencies with the Dask version may exist.
See the User Guide for more on which values are considered missing, and how to work with missing data.
Parameters: axis : {0 or ‘index’, 1 or ‘columns’}, default 0 (Not supported in Dask)
Determine if rows or columns which contain missing values are removed.
- 0, or ‘index’ : Drop rows which contain missing values.
- 1, or ‘columns’ : Drop columns which contain missing value.
Deprecated since version 0.23.0: Pass tuple or list to drop on multiple axes. Only a single axis is allowed.
how : {‘any’, ‘all’}, default ‘any’
Determine if row or column is removed from DataFrame, when we have at least one NA or all NA.
- ‘any’ : If any NA values are present, drop that row or column.
- ‘all’ : If all values are NA, drop that row or column.
thresh : int, optional
Require that many non-NA values.
subset : array-like, optional
Labels along other axis to consider, e.g. if you are dropping rows these would be a list of columns to include.
inplace : bool, default False (Not supported in Dask)
If True, do operation inplace and return None.
Returns: DataFrame
DataFrame with NA entries dropped from it.
See also
DataFrame.isna
- Indicate missing values.
DataFrame.notna
- Indicate existing (non-missing) values.
DataFrame.fillna
- Replace missing values.
Series.dropna
- Drop missing values.
Index.dropna
- Drop missing indices.
Examples
>>> df = pd.DataFrame({"name": ['Alfred', 'Batman', 'Catwoman'], # doctest: +SKIP ... "toy": [np.nan, 'Batmobile', 'Bullwhip'], ... "born": [pd.NaT, pd.Timestamp("1940-04-25"), ... pd.NaT]}) >>> df # doctest: +SKIP name toy born 0 Alfred NaN NaT 1 Batman Batmobile 1940-04-25 2 Catwoman Bullwhip NaT
Drop the rows where at least one element is missing.
>>> df.dropna() # doctest: +SKIP name toy born 1 Batman Batmobile 1940-04-25
Drop the columns where at least one element is missing.
>>> df.dropna(axis='columns') # doctest: +SKIP name 0 Alfred 1 Batman 2 Catwoman
Drop the rows where all elements are missing.
>>> df.dropna(how='all') # doctest: +SKIP name toy born 0 Alfred NaN NaT 1 Batman Batmobile 1940-04-25 2 Catwoman Bullwhip NaT
Keep only the rows with at least 2 non-NA values.
>>> df.dropna(thresh=2) # doctest: +SKIP name toy born 1 Batman Batmobile 1940-04-25 2 Catwoman Bullwhip NaT
Define in which columns to look for missing values.
>>> df.dropna(subset=['name', 'born']) # doctest: +SKIP name toy born 1 Batman Batmobile 1940-04-25
Keep the DataFrame with valid entries in the same variable.
>>> df.dropna(inplace=True) # doctest: +SKIP >>> df # doctest: +SKIP name toy born 1 Batman Batmobile 1940-04-25
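A minimal sketch of dropna on a Dask DataFrame built from the same toy data (partition count arbitrary); the Dask signature takes how, subset and thresh, while axis and inplace are not supported:
>>> import numpy as np
>>> import pandas as pd
>>> import dask.dataframe as dd
>>> pdf = pd.DataFrame({"name": ['Alfred', 'Batman', 'Catwoman'],
...                     "toy": [np.nan, 'Batmobile', 'Bullwhip'],
...                     "born": [pd.NaT, pd.Timestamp("1940-04-25"), pd.NaT]})
>>> ddf = dd.from_pandas(pdf, npartitions=2)
>>> ddf.dropna().compute()                          # drop rows with any missing value
>>> ddf.dropna(subset=['name', 'born']).compute()   # only consider these columns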
-
dtypes
¶ Return data types
-
eq
(other, axis='columns', level=None)¶ Get Equal to of dataframe and other, element-wise (binary operator eq).
This docstring was copied from pandas.core.frame.DataFrame.eq.
Some inconsistencies with the Dask version may exist.
Among flexible wrappers (eq, ne, le, lt, ge, gt) to comparison operators.
Equivalent to ==, !=, <=, <, >=, > with support to choose axis (rows or columns) and level for comparison.
Parameters: other : scalar, sequence, Series, or DataFrame
Any single or multiple element data structure, or list-like object.
axis : {0 or ‘index’, 1 or ‘columns’}, default ‘columns’
Whether to compare by the index (0 or ‘index’) or columns (1 or ‘columns’).
level : int or label
Broadcast across a level, matching Index values on the passed MultiIndex level.
Returns: DataFrame of bool
Result of the comparison.
See also
DataFrame.eq
- Compare DataFrames for equality elementwise.
DataFrame.ne
- Compare DataFrames for inequality elementwise.
DataFrame.le
- Compare DataFrames for less than inequality or equality elementwise.
DataFrame.lt
- Compare DataFrames for strictly less than inequality elementwise.
DataFrame.ge
- Compare DataFrames for greater than inequality or equality elementwise.
DataFrame.gt
- Compare DataFrames for strictly greater than inequality elementwise.
Notes
Mismatched indices will be unioned together. NaN values are considered different (i.e. NaN != NaN).
Examples
>>> df = pd.DataFrame({'cost': [250, 150, 100], # doctest: +SKIP ... 'revenue': [100, 250, 300]}, ... index=['A', 'B', 'C']) >>> df # doctest: +SKIP cost revenue A 250 100 B 150 250 C 100 300
Comparison with a scalar, using either the operator or method:
>>> df == 100 # doctest: +SKIP cost revenue A False True B False False C True False
>>> df.eq(100) # doctest: +SKIP cost revenue A False True B False False C True False
When other is a Series, the columns of a DataFrame are aligned with the index of other and broadcast:
>>> df != pd.Series([100, 250], index=["cost", "revenue"]) # doctest: +SKIP cost revenue A True True B True False C False True
Use the method to control the broadcast axis:
>>> df.ne(pd.Series([100, 300], index=["A", "D"]), axis='index') # doctest: +SKIP cost revenue A True False B True True C True True D True True
When comparing to an arbitrary sequence, the number of columns must match the number of elements in other:
>>> df == [250, 100] # doctest: +SKIP cost revenue A True True B False False C False False
Use the method to control the axis:
>>> df.eq([250, 250, 100], axis='index') # doctest: +SKIP cost revenue A True False B False True C True False
Compare to a DataFrame of different shape.
>>> other = pd.DataFrame({'revenue': [300, 250, 100, 150]}, # doctest: +SKIP ... index=['A', 'B', 'C', 'D']) >>> other # doctest: +SKIP revenue A 300 B 250 C 100 D 150
>>> df.gt(other) # doctest: +SKIP cost revenue A False False B False False C False True D False False
Compare to a MultiIndex by level.
>>> df_multindex = pd.DataFrame({'cost': [250, 150, 100, 150, 300, 220], # doctest: +SKIP ... 'revenue': [100, 250, 300, 200, 175, 225]}, ... index=[['Q1', 'Q1', 'Q1', 'Q2', 'Q2', 'Q2'], ... ['A', 'B', 'C', 'A', 'B', 'C']]) >>> df_multindex # doctest: +SKIP cost revenue Q1 A 250 100 B 150 250 C 100 300 Q2 A 150 200 B 300 175 C 220 225
>>> df.le(df_multindex, level=1) # doctest: +SKIP cost revenue Q1 A True True B True True C True True Q2 A False True B True False C True False
-
eval
(expr, inplace=None, **kwargs)¶ Evaluate a string describing operations on DataFrame columns.
This docstring was copied from pandas.core.frame.DataFrame.eval.
Some inconsistencies with the Dask version may exist.
Operates on columns only, not specific rows or elements. This allows eval to run arbitrary code, which can make you vulnerable to code injection if you pass user input to this function.
Parameters: expr : str
The expression string to evaluate.
inplace : bool, default False
If the expression contains an assignment, whether to perform the operation inplace and mutate the existing DataFrame. Otherwise, a new DataFrame is returned.
New in version 0.18.0.
kwargs : dict
Returns: ndarray, scalar, or pandas object
The result of the evaluation.
See also
DataFrame.query
- Evaluates a boolean expression to query the columns of a frame.
DataFrame.assign
- Can evaluate an expression or function to create new values for a column.
eval
- Evaluate a Python expression as a string using various backends.
Notes
For more details see the API documentation for eval(). For detailed examples see enhancing performance with eval.
Examples
>>> df = pd.DataFrame({'A': range(1, 6), 'B': range(10, 0, -2)}) # doctest: +SKIP >>> df # doctest: +SKIP A B 0 1 10 1 2 8 2 3 6 3 4 4 4 5 2 >>> df.eval('A + B') # doctest: +SKIP 0 11 1 10 2 9 3 8 4 7 dtype: int64
Assignment is allowed though by default the original DataFrame is not modified.
>>> df.eval('C = A + B') # doctest: +SKIP A B C 0 1 10 11 1 2 8 10 2 3 6 9 3 4 4 8 4 5 2 7 >>> df # doctest: +SKIP A B 0 1 10 1 2 8 2 3 6 3 4 4 4 5 2
Use
inplace=True
to modify the original DataFrame.>>> df.eval('C = A + B', inplace=True) # doctest: +SKIP >>> df # doctest: +SKIP A B C 0 1 10 11 1 2 8 10 2 3 6 9 3 4 4 8 4 5 2 7
-
explode
(column)¶ Transform each element of a list-like to a row, replicating the index values.
This docstring was copied from pandas.core.frame.DataFrame.explode.
Some inconsistencies with the Dask version may exist.
New in version 0.25.0.
Parameters: column : str or tuple
Returns: DataFrame
Exploded lists to rows of the subset columns; index will be duplicated for these rows.
Raises: ValueError :
if columns of the frame are not unique.
See also
DataFrame.unstack
- Pivot a level of the (necessarily hierarchical) index labels
DataFrame.melt
- Unpivot a DataFrame from wide format to long format
Series.explode
- Explode a DataFrame from list-like columns to long format.
Notes
This routine will explode list-likes including lists, tuples, Series, and np.ndarray. The result dtype of the subset rows will be object. Scalars will be returned unchanged. Empty list-likes will result in a np.nan for that row.
Examples
>>> df = pd.DataFrame({'A': [[1, 2, 3], 'foo', [], [3, 4]], 'B': 1}) # doctest: +SKIP >>> df # doctest: +SKIP A B 0 [1, 2, 3] 1 1 foo 1 2 [] 1 3 [3, 4] 1
>>> df.explode('A') # doctest: +SKIP A B 0 1 1 0 2 1 0 3 1 1 foo 1 2 NaN 1 3 3 1 3 4 1
-
ffill
(axis=None, limit=None)¶ Synonym for DataFrame.fillna() with method='ffill'.
This docstring was copied from pandas.core.frame.DataFrame.ffill.
Some inconsistencies with the Dask version may exist.
Returns: DataFrame
Object with missing values filled.
-
fillna
(value=None, method=None, limit=None, axis=None)¶ Fill NA/NaN values using the specified method.
This docstring was copied from pandas.core.frame.DataFrame.fillna.
Some inconsistencies with the Dask version may exist.
Parameters: value : scalar, dict, Series, or DataFrame
Value to use to fill holes (e.g. 0), alternately a dict/Series/DataFrame of values specifying which value to use for each index (for a Series) or column (for a DataFrame). Values not in the dict/Series/DataFrame will not be filled. This value cannot be a list.
method : {‘backfill’, ‘bfill’, ‘pad’, ‘ffill’, None}, default None
Method to use for filling holes in reindexed Series. pad / ffill: propagate last valid observation forward to next valid. backfill / bfill: use next valid observation to fill gap.
axis : {0 or ‘index’, 1 or ‘columns’}
Axis along which to fill missing values.
inplace : bool, default False (Not supported in Dask)
If True, fill in-place. Note: this will modify any other views on this object (e.g., a no-copy slice for a column in a DataFrame).
limit : int, default None
If method is specified, this is the maximum number of consecutive NaN values to forward/backward fill. In other words, if there is a gap with more than this number of consecutive NaNs, it will only be partially filled. If method is not specified, this is the maximum number of entries along the entire axis where NaNs will be filled. Must be greater than 0 if not None.
downcast : dict, default is None (Not supported in Dask)
A dict of item->dtype of what to downcast if possible, or the string ‘infer’ which will try to downcast to an appropriate equal type (e.g. float64 to int64 if possible).
Returns: DataFrame
Object with missing values filled.
See also
interpolate
- Fill NaN values using interpolation.
reindex
- Conform object to new index.
asfreq
- Convert TimeSeries to specified frequency.
Examples
>>> df = pd.DataFrame([[np.nan, 2, np.nan, 0], # doctest: +SKIP ... [3, 4, np.nan, 1], ... [np.nan, np.nan, np.nan, 5], ... [np.nan, 3, np.nan, 4]], ... columns=list('ABCD')) >>> df # doctest: +SKIP A B C D 0 NaN 2.0 NaN 0 1 3.0 4.0 NaN 1 2 NaN NaN NaN 5 3 NaN 3.0 NaN 4
Replace all NaN elements with 0s.
>>> df.fillna(0) # doctest: +SKIP A B C D 0 0.0 2.0 0.0 0 1 3.0 4.0 0.0 1 2 0.0 0.0 0.0 5 3 0.0 3.0 0.0 4
We can also propagate non-null values forward or backward.
>>> df.fillna(method='ffill') # doctest: +SKIP A B C D 0 NaN 2.0 NaN 0 1 3.0 4.0 NaN 1 2 3.0 4.0 NaN 5 3 3.0 3.0 NaN 4
Replace all NaN elements in column ‘A’, ‘B’, ‘C’, and ‘D’, with 0, 1, 2, and 3 respectively.
>>> values = {'A': 0, 'B': 1, 'C': 2, 'D': 3} # doctest: +SKIP >>> df.fillna(value=values) # doctest: +SKIP A B C D 0 0.0 2.0 2.0 0 1 3.0 4.0 2.0 1 2 0.0 1.0 2.0 5 3 0.0 3.0 2.0 4
Only replace the first NaN element.
>>> df.fillna(value=values, limit=1) # doctest: +SKIP A B C D 0 0.0 2.0 2.0 0 1 3.0 4.0 NaN 1 2 NaN 1.0 NaN 5 3 NaN 3.0 NaN 4
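A minimal sketch of fillna on a Dask DataFrame built from the same toy data (partition count arbitrary):
>>> import numpy as np
>>> import pandas as pd
>>> import dask.dataframe as dd
>>> pdf = pd.DataFrame([[np.nan, 2, np.nan, 0],
...                     [3, 4, np.nan, 1],
...                     [np.nan, np.nan, np.nan, 5],
...                     [np.nan, 3, np.nan, 4]],
...                    columns=list('ABCD'))
>>> ddf = dd.from_pandas(pdf, npartitions=2)
>>> ddf.fillna(0).compute()                # replace every NaN with 0
>>> ddf.fillna(method='ffill').compute()   # forward-fill, including across partition edges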
-
first
(offset)¶ Convenience method for subsetting initial periods of time series data based on a date offset.
This docstring was copied from pandas.core.frame.DataFrame.first.
Some inconsistencies with the Dask version may exist.
Parameters: offset : string, DateOffset, dateutil.relativedelta
Returns: subset : same type as caller
Raises: TypeError
If the index is not a DatetimeIndex
See also
last
- Select final periods of time series based on a date offset.
at_time
- Select values at a particular time of the day.
between_time
- Select values between particular times of the day.
Examples
>>> i = pd.date_range('2018-04-09', periods=4, freq='2D') # doctest: +SKIP >>> ts = pd.DataFrame({'A': [1,2,3,4]}, index=i) # doctest: +SKIP >>> ts # doctest: +SKIP A 2018-04-09 1 2018-04-11 2 2018-04-13 3 2018-04-15 4
Get the rows for the first 3 days:
>>> ts.first('3D') # doctest: +SKIP A 2018-04-09 1 2018-04-11 2
Notice that data for the first 3 calendar days were returned, not the first 3 days observed in the dataset, and therefore data for 2018-04-13 was not returned.
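A minimal sketch of first on a Dask DataFrame; first generally needs a datetime index with known divisions, which from_pandas provides here (data and partitioning are illustrative):
>>> import pandas as pd
>>> import dask.dataframe as dd
>>> i = pd.date_range('2018-04-09', periods=4, freq='2D')
>>> pdf = pd.DataFrame({'A': [1, 2, 3, 4]}, index=i)
>>> ddf = dd.from_pandas(pdf, npartitions=2)   # divisions are known from the datetime index
>>> ddf.first('3D').compute()                  # rows within 3 calendar days of the start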
-
floordiv
(other, axis='columns', level=None, fill_value=None)¶ Get Integer division of dataframe and other, element-wise (binary operator floordiv).
This docstring was copied from pandas.core.frame.DataFrame.floordiv.
Some inconsistencies with the Dask version may exist.
Equivalent to dataframe // other, but with support to substitute a fill_value for missing data in one of the inputs. With reverse version, rfloordiv.
Among flexible wrappers (add, sub, mul, div, mod, pow) to arithmetic operators: +, -, *, /, //, %, **.
Parameters: other : scalar, sequence, Series, or DataFrame
Any single or multiple element data structure, or list-like object.
axis : {0 or ‘index’, 1 or ‘columns’}
Whether to compare by the index (0 or ‘index’) or columns (1 or ‘columns’). For Series input, axis to match Series index on.
level : int or label
Broadcast across a level, matching Index values on the passed MultiIndex level.
fill_value : float or None, default None
Fill existing missing (NaN) values, and any new element needed for successful DataFrame alignment, with this value before computation. If data in both corresponding DataFrame locations is missing the result will be missing.
Returns: DataFrame
Result of the arithmetic operation.
See also
DataFrame.add
- Add DataFrames.
DataFrame.sub
- Subtract DataFrames.
DataFrame.mul
- Multiply DataFrames.
DataFrame.div
- Divide DataFrames (float division).
DataFrame.truediv
- Divide DataFrames (float division).
DataFrame.floordiv
- Divide DataFrames (integer division).
DataFrame.mod
- Calculate modulo (remainder after division).
DataFrame.pow
- Calculate exponential power.
Notes
Mismatched indices will be unioned together.
Examples
>>> df = pd.DataFrame({'angles': [0, 3, 4], # doctest: +SKIP ... 'degrees': [360, 180, 360]}, ... index=['circle', 'triangle', 'rectangle']) >>> df # doctest: +SKIP angles degrees circle 0 360 triangle 3 180 rectangle 4 360
Add a scalar with operator version, which returns the same results.
>>> df + 1 # doctest: +SKIP angles degrees circle 1 361 triangle 4 181 rectangle 5 361
>>> df.add(1) # doctest: +SKIP angles degrees circle 1 361 triangle 4 181 rectangle 5 361
Divide by constant with reverse version.
>>> df.div(10) # doctest: +SKIP angles degrees circle 0.0 36.0 triangle 0.3 18.0 rectangle 0.4 36.0
>>> df.rdiv(10) # doctest: +SKIP angles degrees circle inf 0.027778 triangle 3.333333 0.055556 rectangle 2.500000 0.027778
Subtract a list and Series by axis with operator version.
>>> df - [1, 2] # doctest: +SKIP angles degrees circle -1 358 triangle 2 178 rectangle 3 358
>>> df.sub([1, 2], axis='columns') # doctest: +SKIP angles degrees circle -1 358 triangle 2 178 rectangle 3 358
>>> df.sub(pd.Series([1, 1, 1], index=['circle', 'triangle', 'rectangle']), # doctest: +SKIP ... axis='index') angles degrees circle -1 359 triangle 2 179 rectangle 3 359
Multiply a DataFrame of different shape with operator version.
>>> other = pd.DataFrame({'angles': [0, 3, 4]}, # doctest: +SKIP ... index=['circle', 'triangle', 'rectangle']) >>> other # doctest: +SKIP angles circle 0 triangle 3 rectangle 4
>>> df * other # doctest: +SKIP angles degrees circle 0 NaN triangle 9 NaN rectangle 16 NaN
>>> df.mul(other, fill_value=0) # doctest: +SKIP angles degrees circle 0 0.0 triangle 9 0.0 rectangle 16 0.0
Divide by a MultiIndex by level.
>>> df_multindex = pd.DataFrame({'angles': [0, 3, 4, 4, 5, 6], # doctest: +SKIP ... 'degrees': [360, 180, 360, 360, 540, 720]}, ... index=[['A', 'A', 'A', 'B', 'B', 'B'], ... ['circle', 'triangle', 'rectangle', ... 'square', 'pentagon', 'hexagon']]) >>> df_multindex # doctest: +SKIP angles degrees A circle 0 360 triangle 3 180 rectangle 4 360 B square 4 360 pentagon 5 540 hexagon 6 720
>>> df.div(df_multindex, level=1, fill_value=0) # doctest: +SKIP angles degrees A circle NaN 1.0 triangle 1.0 1.0 rectangle 1.0 1.0 B square 0.0 0.0 pentagon 0.0 0.0 hexagon 0.0 0.0
-
ge
(other, axis='columns', level=None)¶ Get Greater than or equal to of dataframe and other, element-wise (binary operator ge).
This docstring was copied from pandas.core.frame.DataFrame.ge.
Some inconsistencies with the Dask version may exist.
Among flexible wrappers (eq, ne, le, lt, ge, gt) to comparison operators.
Equivalent to ==, !=, <=, <, >=, > with support to choose axis (rows or columns) and level for comparison.
Parameters: other : scalar, sequence, Series, or DataFrame
Any single or multiple element data structure, or list-like object.
axis : {0 or ‘index’, 1 or ‘columns’}, default ‘columns’
Whether to compare by the index (0 or ‘index’) or columns (1 or ‘columns’).
level : int or label
Broadcast across a level, matching Index values on the passed MultiIndex level.
Returns: DataFrame of bool
Result of the comparison.
See also
DataFrame.eq
- Compare DataFrames for equality elementwise.
DataFrame.ne
- Compare DataFrames for inequality elementwise.
DataFrame.le
- Compare DataFrames for less than inequality or equality elementwise.
DataFrame.lt
- Compare DataFrames for strictly less than inequality elementwise.
DataFrame.ge
- Compare DataFrames for greater than inequality or equality elementwise.
DataFrame.gt
- Compare DataFrames for strictly greater than inequality elementwise.
Notes
Mismatched indices will be unioned together. NaN values are considered different (i.e. NaN != NaN).
Examples
>>> df = pd.DataFrame({'cost': [250, 150, 100], # doctest: +SKIP ... 'revenue': [100, 250, 300]}, ... index=['A', 'B', 'C']) >>> df # doctest: +SKIP cost revenue A 250 100 B 150 250 C 100 300
Comparison with a scalar, using either the operator or method:
>>> df == 100 # doctest: +SKIP cost revenue A False True B False False C True False
>>> df.eq(100) # doctest: +SKIP cost revenue A False True B False False C True False
When other is a Series, the columns of a DataFrame are aligned with the index of other and broadcast:
>>> df != pd.Series([100, 250], index=["cost", "revenue"]) # doctest: +SKIP cost revenue A True True B True False C False True
Use the method to control the broadcast axis:
>>> df.ne(pd.Series([100, 300], index=["A", "D"]), axis='index') # doctest: +SKIP cost revenue A True False B True True C True True D True True
When comparing to an arbitrary sequence, the number of columns must match the number of elements in other:
>>> df == [250, 100] # doctest: +SKIP cost revenue A True True B False False C False False
Use the method to control the axis:
>>> df.eq([250, 250, 100], axis='index') # doctest: +SKIP cost revenue A True False B False True C True False
Compare to a DataFrame of different shape.
>>> other = pd.DataFrame({'revenue': [300, 250, 100, 150]}, # doctest: +SKIP ... index=['A', 'B', 'C', 'D']) >>> other # doctest: +SKIP revenue A 300 B 250 C 100 D 150
>>> df.gt(other) # doctest: +SKIP cost revenue A False False B False False C False True D False False
Compare to a MultiIndex by level.
>>> df_multindex = pd.DataFrame({'cost': [250, 150, 100, 150, 300, 220], # doctest: +SKIP ... 'revenue': [100, 250, 300, 200, 175, 225]}, ... index=[['Q1', 'Q1', 'Q1', 'Q2', 'Q2', 'Q2'], ... ['A', 'B', 'C', 'A', 'B', 'C']]) >>> df_multindex # doctest: +SKIP cost revenue Q1 A 250 100 B 150 250 C 100 300 Q2 A 150 200 B 300 175 C 220 225
>>> df.le(df_multindex, level=1) # doctest: +SKIP cost revenue Q1 A True True B True True C True True Q2 A False True B True False C True False
-
get_dtype_counts
()¶ Return counts of unique dtypes in this object.
This docstring was copied from pandas.core.frame.DataFrame.get_dtype_counts.
Some inconsistencies with the Dask version may exist.
Deprecated since version 0.25.0.
Use .dtypes.value_counts() instead.
Returns: dtype : Series
Series with the count of columns with each dtype.
See also
dtypes
- Return the dtypes in this object.
Examples
>>> a = [['a', 1, 1.0], ['b', 2, 2.0], ['c', 3, 3.0]] # doctest: +SKIP >>> df = pd.DataFrame(a, columns=['str', 'int', 'float']) # doctest: +SKIP >>> df # doctest: +SKIP str int float 0 a 1 1.0 1 b 2 2.0 2 c 3 3.0
>>> df.get_dtype_counts() # doctest: +SKIP float64 1 int64 1 object 1 dtype: int64
-
get_ftype_counts
()¶ Return counts of unique ftypes in this object.
This docstring was copied from pandas.core.frame.DataFrame.get_ftype_counts.
Some inconsistencies with the Dask version may exist.
Deprecated since version 0.23.0.
This is useful for SparseDataFrame or for DataFrames containing sparse arrays.
Returns: dtype : Series
Series with the count of columns with each type and sparsity (dense/sparse).
See also
ftypes
- Return ftypes (indication of sparse/dense and dtype) in this object.
Examples
>>> a = [['a', 1, 1.0], ['b', 2, 2.0], ['c', 3, 3.0]] # doctest: +SKIP >>> df = pd.DataFrame(a, columns=['str', 'int', 'float']) # doctest: +SKIP >>> df # doctest: +SKIP str int float 0 a 1 1.0 1 b 2 2.0 2 c 3 3.0
>>> df.get_ftype_counts() # doctest: +SKIP float64:dense 1 int64:dense 1 object:dense 1 dtype: int64
-
get_partition
(n)¶ Get a dask DataFrame/Series representing the nth partition.
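For illustration, a small usage sketch (not part of the copied docstring); it assumes a toy pandas DataFrame and dask.dataframe imported as dd:
>>> import pandas as pd
>>> import dask.dataframe as dd
>>> ddf = dd.from_pandas(pd.DataFrame({'x': range(6)}), npartitions=3)
>>> part = ddf.get_partition(1)   # lazily select the second partition
>>> part.compute()                # materialize only that partition
   x
2  2
3  3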
-
groupby
(by=None, **kwargs)¶ Group DataFrame or Series using a mapper or by a Series of columns.
This docstring was copied from pandas.core.frame.DataFrame.groupby.
Some inconsistencies with the Dask version may exist.
A groupby operation involves some combination of splitting the object, applying a function, and combining the results. This can be used to group large amounts of data and compute operations on these groups.
Parameters: by : mapping, function, label, or list of labels
Used to determine the groups for the groupby. If by is a function, it’s called on each value of the object’s index. If a dict or Series is passed, the Series or dict VALUES will be used to determine the groups (the Series’ values are first aligned; see the .align() method). If an ndarray is passed, the values are used as-is to determine the groups. A label or list of labels may be passed to group by the columns in self. Notice that a tuple is interpreted as a (single) key.
axis : {0 or ‘index’, 1 or ‘columns’}, default 0 (Not supported in Dask)
Split along rows (0) or columns (1).
level : int, level name, or sequence of such, default None (Not supported in Dask)
If the axis is a MultiIndex (hierarchical), group by a particular level or levels.
as_index : bool, default True (Not supported in Dask)
For aggregated output, return object with group labels as the index. Only relevant for DataFrame input. as_index=False is effectively “SQL-style” grouped output.
sort : bool, default True (Not supported in Dask)
Sort group keys. Get better performance by turning this off. Note this does not influence the order of observations within each group. Groupby preserves the order of rows within each group.
group_keys : bool, default True (Not supported in Dask)
When calling apply, add group keys to index to identify pieces.
squeeze : bool, default False (Not supported in Dask)
Reduce the dimensionality of the return type if possible, otherwise return a consistent type.
observed : bool, default False (Not supported in Dask)
This only applies if any of the groupers are Categoricals. If True: only show observed values for categorical groupers. If False: show all values for categorical groupers.
New in version 0.23.0.
**kwargs
Optional, only accepts keyword argument ‘mutated’ and is passed to groupby.
Returns: DataFrameGroupBy or SeriesGroupBy
Depends on the calling object and returns groupby object that contains information about the groups.
See also
resample
- Convenience method for frequency conversion and resampling of time series.
Notes
See the user guide for more.
Examples
>>> df = pd.DataFrame({'Animal': ['Falcon', 'Falcon', # doctest: +SKIP ... 'Parrot', 'Parrot'], ... 'Max Speed': [380., 370., 24., 26.]}) >>> df # doctest: +SKIP Animal Max Speed 0 Falcon 380.0 1 Falcon 370.0 2 Parrot 24.0 3 Parrot 26.0 >>> df.groupby(['Animal']).mean() # doctest: +SKIP Max Speed Animal Falcon 375.0 Parrot 25.0
Hierarchical Indexes
We can groupby different levels of a hierarchical index using the level parameter:
>>> arrays = [['Falcon', 'Falcon', 'Parrot', 'Parrot'], # doctest: +SKIP ... ['Captive', 'Wild', 'Captive', 'Wild']] >>> index = pd.MultiIndex.from_arrays(arrays, names=('Animal', 'Type')) # doctest: +SKIP >>> df = pd.DataFrame({'Max Speed': [390., 350., 30., 20.]}, # doctest: +SKIP ... index=index) >>> df # doctest: +SKIP Max Speed Animal Type Falcon Captive 390.0 Wild 350.0 Parrot Captive 30.0 Wild 20.0 >>> df.groupby(level=0).mean() # doctest: +SKIP Max Speed Animal Falcon 370.0 Parrot 25.0 >>> df.groupby(level=1).mean() # doctest: +SKIP Max Speed Type Captive 210.0 Wild 185.0
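The pandas examples above carry over to Dask with the usual caveat that results are lazy until computed. A minimal illustrative sketch (not part of the copied docstring), assuming dask.dataframe is available as dd:
>>> import pandas as pd
>>> import dask.dataframe as dd
>>> pdf = pd.DataFrame({'Animal': ['Falcon', 'Falcon', 'Parrot', 'Parrot'],
...                     'Max Speed': [380., 370., 24., 26.]})
>>> ddf = dd.from_pandas(pdf, npartitions=2)
>>> ddf.groupby('Animal')['Max Speed'].mean().compute()
Animal
Falcon    375.0
Parrot     25.0
Name: Max Speed, dtype: float64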
-
gt
(other, axis='columns', level=None)¶ Get Greater than of dataframe and other, element-wise (binary operator gt).
This docstring was copied from pandas.core.frame.DataFrame.gt.
Some inconsistencies with the Dask version may exist.
Among flexible wrappers (eq, ne, le, lt, ge, gt) to comparison operators.
Equivalent to ==, !=, <=, <, >=, > with support to choose axis (rows or columns) and level for comparison.
Parameters: other : scalar, sequence, Series, or DataFrame
Any single or multiple element data structure, or list-like object.
axis : {0 or ‘index’, 1 or ‘columns’}, default ‘columns’
Whether to compare by the index (0 or ‘index’) or columns (1 or ‘columns’).
level : int or label
Broadcast across a level, matching Index values on the passed MultiIndex level.
Returns: DataFrame of bool
Result of the comparison.
See also
DataFrame.eq
- Compare DataFrames for equality elementwise.
DataFrame.ne
- Compare DataFrames for inequality elementwise.
DataFrame.le
- Compare DataFrames for less than inequality or equality elementwise.
DataFrame.lt
- Compare DataFrames for strictly less than inequality elementwise.
DataFrame.ge
- Compare DataFrames for greater than inequality or equality elementwise.
DataFrame.gt
- Compare DataFrames for strictly greater than inequality elementwise.
Notes
Mismatched indices will be unioned together. NaN values are considered different (i.e. NaN != NaN).
Examples
>>> df = pd.DataFrame({'cost': [250, 150, 100], # doctest: +SKIP ... 'revenue': [100, 250, 300]}, ... index=['A', 'B', 'C']) >>> df # doctest: +SKIP cost revenue A 250 100 B 150 250 C 100 300
Comparison with a scalar, using either the operator or method:
>>> df == 100 # doctest: +SKIP cost revenue A False True B False False C True False
>>> df.eq(100) # doctest: +SKIP cost revenue A False True B False False C True False
When other is a Series, the columns of a DataFrame are aligned with the index of other and broadcast:
>>> df != pd.Series([100, 250], index=["cost", "revenue"]) # doctest: +SKIP cost revenue A True True B True False C False True
Use the method to control the broadcast axis:
>>> df.ne(pd.Series([100, 300], index=["A", "D"]), axis='index') # doctest: +SKIP cost revenue A True False B True True C True True D True True
When comparing to an arbitrary sequence, the number of columns must match the number of elements in other:
>>> df == [250, 100] # doctest: +SKIP cost revenue A True True B False False C False False
Use the method to control the axis:
>>> df.eq([250, 250, 100], axis='index') # doctest: +SKIP cost revenue A True False B False True C True False
Compare to a DataFrame of different shape.
>>> other = pd.DataFrame({'revenue': [300, 250, 100, 150]}, # doctest: +SKIP ... index=['A', 'B', 'C', 'D']) >>> other # doctest: +SKIP revenue A 300 B 250 C 100 D 150
>>> df.gt(other) # doctest: +SKIP cost revenue A False False B False False C False True D False False
Compare to a MultiIndex by level.
>>> df_multindex = pd.DataFrame({'cost': [250, 150, 100, 150, 300, 220], # doctest: +SKIP ... 'revenue': [100, 250, 300, 200, 175, 225]}, ... index=[['Q1', 'Q1', 'Q1', 'Q2', 'Q2', 'Q2'], ... ['A', 'B', 'C', 'A', 'B', 'C']]) >>> df_multindex # doctest: +SKIP cost revenue Q1 A 250 100 B 150 250 C 100 300 Q2 A 150 200 B 300 175 C 220 225
>>> df.le(df_multindex, level=1) # doctest: +SKIP cost revenue Q1 A True True B True True C True True Q2 A False True B True False C True False
-
head
(n=5, npartitions=1, compute=True)¶ First n rows of the dataset
Parameters: n : int, optional
The number of rows to return. Default is 5.
npartitions : int, optional
Elements are only taken from the first npartitions, with a default of 1. If there are fewer than n rows in the first npartitions, a warning will be raised and any found rows returned. Pass -1 to use all partitions.
compute : bool, optional
Whether to compute the result, default is True.
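A short sketch (not part of the original docstring) of how npartitions affects head, assuming a Dask DataFrame built with dd.from_pandas:
>>> import pandas as pd
>>> import dask.dataframe as dd
>>> ddf = dd.from_pandas(pd.DataFrame({'x': range(10)}), npartitions=2)
>>> ddf.head(3)                    # only looks at the first partition
   x
0  0
1  1
2  2
>>> ddf.head(8, npartitions=-1)    # search every partition for 8 rows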
-
idxmax
(axis=None, skipna=True, split_every=False)¶ Return index of first occurrence of maximum over requested axis. NA/null values are excluded.
This docstring was copied from pandas.core.frame.DataFrame.idxmax.
Some inconsistencies with the Dask version may exist.
Parameters: axis : {0 or ‘index’, 1 or ‘columns’}, default 0
0 or ‘index’ for row-wise, 1 or ‘columns’ for column-wise
skipna : boolean, default True
Exclude NA/null values. If an entire row/column is NA, the result will be NA.
Returns: Series
Indexes of maxima along the specified axis.
Raises: ValueError
- If the row/column is empty
See also
Notes
This method is the DataFrame version of ndarray.argmax.
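A hedged usage sketch (not from the copied pandas docstring): in Dask the result is lazy, so call .compute() to obtain the pandas Series of index labels:
>>> import pandas as pd
>>> import dask.dataframe as dd
>>> ddf = dd.from_pandas(pd.DataFrame({'a': [1, 5, 3], 'b': [9, 2, 4]}), npartitions=2)
>>> ddf.idxmax().compute()   # index label of the maximum in each column
a    1
b    0
dtype: int64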
-
idxmin
(axis=None, skipna=True, split_every=False)¶ Return index of first occurrence of minimum over requested axis. NA/null values are excluded.
This docstring was copied from pandas.core.frame.DataFrame.idxmin.
Some inconsistencies with the Dask version may exist.
Parameters: axis : {0 or ‘index’, 1 or ‘columns’}, default 0
0 or ‘index’ for row-wise, 1 or ‘columns’ for column-wise
skipna : boolean, default True
Exclude NA/null values. If an entire row/column is NA, the result will be NA.
Returns: Series
Indexes of minima along the specified axis.
Raises: ValueError
- If the row/column is empty
See also
Notes
This method is the DataFrame version of ndarray.argmin.
-
iloc
¶ Purely integer-location based indexing for selection by position.
Only indexing the column positions is supported. Trying to select row positions will raise a ValueError.
See Indexing into Dask DataFrames for more.
Examples
>>> df.iloc[:, [2, 0, 1]] # doctest: +SKIP
-
index
¶ Return dask Index instance
-
info
(buf=None, verbose=False, memory_usage=False)¶ Concise summary of a Dask DataFrame.
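A minimal sketch (not part of the docstring) of calling info on a Dask DataFrame; the exact output layout may differ between Dask versions:
>>> import pandas as pd
>>> import dask.dataframe as dd
>>> ddf = dd.from_pandas(pd.DataFrame({'x': [1, 2, 3], 'y': [1., 2., 3.]}), npartitions=1)
>>> ddf.info()                                 # prints column names and dtypes
>>> ddf.info(verbose=True, memory_usage=True)  # adds per-column counts and memory use (triggers computation)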
-
isin
(values)¶ Whether each element in the DataFrame is contained in values.
This docstring was copied from pandas.core.frame.DataFrame.isin.
Some inconsistencies with the Dask version may exist.
Parameters: values : iterable, Series, DataFrame or dict
The result will only be true at a location if all the labels match. If values is a Series, that’s the index. If values is a dict, the keys must be the column names, which must match. If values is a DataFrame, then both the index and column labels must match.
Returns: DataFrame
DataFrame of booleans showing whether each element in the DataFrame is contained in values.
See also
DataFrame.eq
- Equality test for DataFrame.
Series.isin
- Equivalent method on Series.
Series.str.contains
- Test if pattern or regex is contained within a string of a Series or Index.
Examples
>>> df = pd.DataFrame({'num_legs': [2, 4], 'num_wings': [2, 0]}, # doctest: +SKIP ... index=['falcon', 'dog']) >>> df # doctest: +SKIP num_legs num_wings falcon 2 2 dog 4 0
When values is a list, check whether every value in the DataFrame is present in the list (which animals have 0 or 2 legs or wings):
>>> df.isin([0, 2]) # doctest: +SKIP num_legs num_wings falcon True True dog False True
When values is a dict, we can pass values to check for each column separately:
>>> df.isin({'num_wings': [0, 3]}) # doctest: +SKIP num_legs num_wings falcon False False dog False True
When values is a Series or DataFrame, the index and column must match. Note that 'falcon' does not match based on the number of legs in other.
>>> other = pd.DataFrame({'num_legs': [8, 2], 'num_wings': [0, 2]}, # doctest: +SKIP ... index=['spider', 'falcon']) >>> df.isin(other) # doctest: +SKIP num_legs num_wings falcon True True dog False False
-
isna
()¶ Detect missing values.
This docstring was copied from pandas.core.frame.DataFrame.isna.
Some inconsistencies with the Dask version may exist.
Return a boolean same-sized object indicating if the values are NA. NA values, such as None or numpy.NaN, get mapped to True values. Everything else gets mapped to False values. Characters such as empty strings '' or numpy.inf are not considered NA values (unless you set pandas.options.mode.use_inf_as_na = True).
Returns: DataFrame
Mask of bool values for each element in DataFrame that indicates whether an element is an NA value.
See also
DataFrame.isnull
- Alias of isna.
DataFrame.notna
- Boolean inverse of isna.
DataFrame.dropna
- Omit axes labels with missing values.
isna
- Top-level isna.
Examples
Show which entries in a DataFrame are NA.
>>> df = pd.DataFrame({'age': [5, 6, np.NaN], # doctest: +SKIP ... 'born': [pd.NaT, pd.Timestamp('1939-05-27'), ... pd.Timestamp('1940-04-25')], ... 'name': ['Alfred', 'Batman', ''], ... 'toy': [None, 'Batmobile', 'Joker']}) >>> df # doctest: +SKIP age born name toy 0 5.0 NaT Alfred None 1 6.0 1939-05-27 Batman Batmobile 2 NaN 1940-04-25 Joker
>>> df.isna() # doctest: +SKIP age born name toy 0 False True False True 1 False False False False 2 True False False False
Show which entries in a Series are NA.
>>> ser = pd.Series([5, 6, np.NaN]) # doctest: +SKIP >>> ser # doctest: +SKIP 0 5.0 1 6.0 2 NaN dtype: float64
>>> ser.isna() # doctest: +SKIP 0 False 1 False 2 True dtype: bool
-
isnull
()¶ Detect missing values.
This docstring was copied from pandas.core.frame.DataFrame.isnull.
Some inconsistencies with the Dask version may exist.
Return a boolean same-sized object indicating if the values are NA. NA values, such as None or numpy.NaN, get mapped to True values. Everything else gets mapped to False values. Characters such as empty strings '' or numpy.inf are not considered NA values (unless you set pandas.options.mode.use_inf_as_na = True).
Returns: DataFrame
Mask of bool values for each element in DataFrame that indicates whether an element is an NA value.
See also
DataFrame.isnull
- Alias of isna.
DataFrame.notna
- Boolean inverse of isna.
DataFrame.dropna
- Omit axes labels with missing values.
isna
- Top-level isna.
Examples
Show which entries in a DataFrame are NA.
>>> df = pd.DataFrame({'age': [5, 6, np.NaN], # doctest: +SKIP ... 'born': [pd.NaT, pd.Timestamp('1939-05-27'), ... pd.Timestamp('1940-04-25')], ... 'name': ['Alfred', 'Batman', ''], ... 'toy': [None, 'Batmobile', 'Joker']}) >>> df # doctest: +SKIP age born name toy 0 5.0 NaT Alfred None 1 6.0 1939-05-27 Batman Batmobile 2 NaN 1940-04-25 Joker
>>> df.isna() # doctest: +SKIP age born name toy 0 False True False True 1 False False False False 2 True False False False
Show which entries in a Series are NA.
>>> ser = pd.Series([5, 6, np.NaN]) # doctest: +SKIP >>> ser # doctest: +SKIP 0 5.0 1 6.0 2 NaN dtype: float64
>>> ser.isna() # doctest: +SKIP 0 False 1 False 2 True dtype: bool
-
iterrows
()¶ Iterate over DataFrame rows as (index, Series) pairs.
This docstring was copied from pandas.core.frame.DataFrame.iterrows.
Some inconsistencies with the Dask version may exist.
Yields: index : label or tuple of label
The index of the row. A tuple for a MultiIndex.
data : Series
The data of the row as a Series.
it : generator
A generator that iterates over the rows of the frame.
See also
itertuples
- Iterate over DataFrame rows as namedtuples of the values.
items
- Iterate over (column name, Series) pairs.
Notes
Because iterrows returns a Series for each row, it does not preserve dtypes across the rows (dtypes are preserved across columns for DataFrames). For example,
>>> df = pd.DataFrame([[1, 1.5]], columns=['int', 'float']) # doctest: +SKIP >>> row = next(df.iterrows())[1] # doctest: +SKIP >>> row # doctest: +SKIP int 1.0 float 1.5 Name: 0, dtype: float64 >>> print(row['int'].dtype) # doctest: +SKIP float64 >>> print(df['int'].dtype) # doctest: +SKIP int64
To preserve dtypes while iterating over the rows, it is better to use itertuples() which returns namedtuples of the values and which is generally faster than iterrows.
You should never modify something you are iterating over. This is not guaranteed to work in all cases. Depending on the data types, the iterator returns a copy and not a view, and writing to it will have no effect.
-
itertuples
(index=True, name='Pandas')¶ Iterate over DataFrame rows as namedtuples.
This docstring was copied from pandas.core.frame.DataFrame.itertuples.
Some inconsistencies with the Dask version may exist.
Parameters: index : bool, default True
If True, return the index as the first element of the tuple.
name : str or None, default “Pandas”
The name of the returned namedtuples or None to return regular tuples.
Returns: iterator
An object to iterate over namedtuples for each row in the DataFrame with the first field possibly being the index and following fields being the column values.
See also
DataFrame.iterrows
- Iterate over DataFrame rows as (index, Series) pairs.
DataFrame.items
- Iterate over (column name, Series) pairs.
Notes
The column names will be renamed to positional names if they are invalid Python identifiers, repeated, or start with an underscore. With a large number of columns (>255), regular tuples are returned.
Examples
>>> df = pd.DataFrame({'num_legs': [4, 2], 'num_wings': [0, 2]}, # doctest: +SKIP ... index=['dog', 'hawk']) >>> df # doctest: +SKIP num_legs num_wings dog 4 0 hawk 2 2 >>> for row in df.itertuples(): # doctest: +SKIP ... print(row) ... Pandas(Index='dog', num_legs=4, num_wings=0) Pandas(Index='hawk', num_legs=2, num_wings=2)
By setting the index parameter to False we can remove the index as the first element of the tuple:
>>> for row in df.itertuples(index=False): # doctest: +SKIP ... print(row) ... Pandas(num_legs=4, num_wings=0) Pandas(num_legs=2, num_wings=2)
With the name parameter set we set a custom name for the yielded namedtuples:
>>> for row in df.itertuples(name='Animal'): # doctest: +SKIP ... print(row) ... Animal(Index='dog', num_legs=4, num_wings=0) Animal(Index='hawk', num_legs=2, num_wings=2)
-
join
(other, on=None, how='left', lsuffix='', rsuffix='', npartitions=None, shuffle=None)¶ Join columns of another DataFrame.
This docstring was copied from pandas.core.frame.DataFrame.join.
Some inconsistencies with the Dask version may exist.
Join columns with other DataFrame either on index or on a key column. Efficiently join multiple DataFrame objects by index at once by passing a list.
Parameters: other : DataFrame, Series, or list of DataFrame
Index should be similar to one of the columns in this one. If a Series is passed, its name attribute must be set, and that will be used as the column name in the resulting joined DataFrame.
on : str, list of str, or array-like, optional
Column or index level name(s) in the caller to join on the index in other, otherwise joins index-on-index. If multiple values given, the other DataFrame must have a MultiIndex. Can pass an array as the join key if it is not already contained in the calling DataFrame. Like an Excel VLOOKUP operation.
how : {‘left’, ‘right’, ‘outer’, ‘inner’}, default ‘left’
How to handle the operation of the two objects.
- left: use calling frame’s index (or column if on is specified)
- right: use other’s index.
- outer: form union of calling frame’s index (or column if on is specified) with other’s index, and sort it lexicographically.
- inner: form intersection of calling frame’s index (or column if on is specified) with other’s index, preserving the order of the calling’s one.
lsuffix : str, default ‘’
Suffix to use from left frame’s overlapping columns.
rsuffix : str, default ‘’
Suffix to use from right frame’s overlapping columns.
sort : bool, default False (Not supported in Dask)
Order result DataFrame lexicographically by the join key. If False, the order of the join key depends on the join type (how keyword).
Returns: DataFrame
A dataframe containing columns from both the caller and other.
See also
DataFrame.merge
- For column(s)-on-columns(s) operations.
Notes
Parameters on, lsuffix, and rsuffix are not supported when passing a list of DataFrame objects.
Support for specifying index levels as the on parameter was added in version 0.23.0.
Examples
>>> df = pd.DataFrame({'key': ['K0', 'K1', 'K2', 'K3', 'K4', 'K5'], # doctest: +SKIP ... 'A': ['A0', 'A1', 'A2', 'A3', 'A4', 'A5']})
>>> df # doctest: +SKIP key A 0 K0 A0 1 K1 A1 2 K2 A2 3 K3 A3 4 K4 A4 5 K5 A5
>>> other = pd.DataFrame({'key': ['K0', 'K1', 'K2'], # doctest: +SKIP ... 'B': ['B0', 'B1', 'B2']})
>>> other # doctest: +SKIP key B 0 K0 B0 1 K1 B1 2 K2 B2
Join DataFrames using their indexes.
>>> df.join(other, lsuffix='_caller', rsuffix='_other') # doctest: +SKIP key_caller A key_other B 0 K0 A0 K0 B0 1 K1 A1 K1 B1 2 K2 A2 K2 B2 3 K3 A3 NaN NaN 4 K4 A4 NaN NaN 5 K5 A5 NaN NaN
If we want to join using the key columns, we need to set key to be the index in both df and other. The joined DataFrame will have key as its index.
>>> df.set_index('key').join(other.set_index('key')) # doctest: +SKIP A B key K0 A0 B0 K1 A1 B1 K2 A2 B2 K3 A3 NaN K4 A4 NaN K5 A5 NaN
Another option to join using the key columns is to use the on parameter. DataFrame.join always uses other’s index but we can use any column in df. This method preserves the original DataFrame’s index in the result.
>>> df.join(other.set_index('key'), on='key') # doctest: +SKIP key A B 0 K0 A0 B0 1 K1 A1 B1 2 K2 A2 B2 3 K3 A3 NaN 4 K4 A4 NaN 5 K5 A5 NaN
-
known_divisions
¶ Whether divisions are already known
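An illustrative sketch (not part of the docstring): divisions are known for data created with dd.from_pandas, and clear_divisions() discards them:
>>> import pandas as pd
>>> import dask.dataframe as dd
>>> ddf = dd.from_pandas(pd.DataFrame({'x': [1, 2, 3, 4]}), npartitions=2)
>>> ddf.known_divisions                   # from_pandas records index boundaries
True
>>> ddf.clear_divisions().known_divisions
False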
-
last
(offset)¶ Convenience method for subsetting final periods of time series data based on a date offset.
This docstring was copied from pandas.core.frame.DataFrame.last.
Some inconsistencies with the Dask version may exist.
Parameters: offset : string, DateOffset, dateutil.relativedelta
Returns: subset : same type as caller
Raises: TypeError
If the index is not a DatetimeIndex
See also
first
- Select initial periods of time series based on a date offset.
at_time
- Select values at a particular time of the day.
between_time
- Select values between particular times of the day.
Examples
>>> i = pd.date_range('2018-04-09', periods=4, freq='2D') # doctest: +SKIP >>> ts = pd.DataFrame({'A': [1,2,3,4]}, index=i) # doctest: +SKIP >>> ts # doctest: +SKIP A 2018-04-09 1 2018-04-11 2 2018-04-13 3 2018-04-15 4
Get the rows for the last 3 days:
>>> ts.last('3D') # doctest: +SKIP A 2018-04-13 3 2018-04-15 4
Notice that the data for the last 3 calendar days was returned, not the last 3 observed days in the dataset, and therefore data for 2018-04-11 was not returned.
-
le
(other, axis='columns', level=None)¶ Get Less than or equal to of dataframe and other, element-wise (binary operator le).
This docstring was copied from pandas.core.frame.DataFrame.le.
Some inconsistencies with the Dask version may exist.
Among flexible wrappers (eq, ne, le, lt, ge, gt) to comparison operators.
Equivalent to ==, !=, <=, <, >=, > with support to choose axis (rows or columns) and level for comparison.
Parameters: other : scalar, sequence, Series, or DataFrame
Any single or multiple element data structure, or list-like object.
axis : {0 or ‘index’, 1 or ‘columns’}, default ‘columns’
Whether to compare by the index (0 or ‘index’) or columns (1 or ‘columns’).
level : int or label
Broadcast across a level, matching Index values on the passed MultiIndex level.
Returns: DataFrame of bool
Result of the comparison.
See also
DataFrame.eq
- Compare DataFrames for equality elementwise.
DataFrame.ne
- Compare DataFrames for inequality elementwise.
DataFrame.le
- Compare DataFrames for less than inequality or equality elementwise.
DataFrame.lt
- Compare DataFrames for strictly less than inequality elementwise.
DataFrame.ge
- Compare DataFrames for greater than inequality or equality elementwise.
DataFrame.gt
- Compare DataFrames for strictly greater than inequality elementwise.
Notes
Mismatched indices will be unioned together. NaN values are considered different (i.e. NaN != NaN).
Examples
>>> df = pd.DataFrame({'cost': [250, 150, 100], # doctest: +SKIP ... 'revenue': [100, 250, 300]}, ... index=['A', 'B', 'C']) >>> df # doctest: +SKIP cost revenue A 250 100 B 150 250 C 100 300
Comparison with a scalar, using either the operator or method:
>>> df == 100 # doctest: +SKIP cost revenue A False True B False False C True False
>>> df.eq(100) # doctest: +SKIP cost revenue A False True B False False C True False
When other is a Series, the columns of a DataFrame are aligned with the index of other and broadcast:
>>> df != pd.Series([100, 250], index=["cost", "revenue"]) # doctest: +SKIP cost revenue A True True B True False C False True
Use the method to control the broadcast axis:
>>> df.ne(pd.Series([100, 300], index=["A", "D"]), axis='index') # doctest: +SKIP cost revenue A True False B True True C True True D True True
When comparing to an arbitrary sequence, the number of columns must match the number of elements in other:
>>> df == [250, 100] # doctest: +SKIP cost revenue A True True B False False C False False
Use the method to control the axis:
>>> df.eq([250, 250, 100], axis='index') # doctest: +SKIP cost revenue A True False B False True C True False
Compare to a DataFrame of different shape.
>>> other = pd.DataFrame({'revenue': [300, 250, 100, 150]}, # doctest: +SKIP ... index=['A', 'B', 'C', 'D']) >>> other # doctest: +SKIP revenue A 300 B 250 C 100 D 150
>>> df.gt(other) # doctest: +SKIP cost revenue A False False B False False C False True D False False
Compare to a MultiIndex by level.
>>> df_multindex = pd.DataFrame({'cost': [250, 150, 100, 150, 300, 220], # doctest: +SKIP ... 'revenue': [100, 250, 300, 200, 175, 225]}, ... index=[['Q1', 'Q1', 'Q1', 'Q2', 'Q2', 'Q2'], ... ['A', 'B', 'C', 'A', 'B', 'C']]) >>> df_multindex # doctest: +SKIP cost revenue Q1 A 250 100 B 150 250 C 100 300 Q2 A 150 200 B 300 175 C 220 225
>>> df.le(df_multindex, level=1) # doctest: +SKIP cost revenue Q1 A True True B True True C True True Q2 A False True B True False C True False
-
loc
¶ Purely label-location based indexer for selection by label.
>>> df.loc["b"] # doctest: +SKIP >>> df.loc["b":"d"] # doctest: +SKIP
-
lt
(other, axis='columns', level=None)¶ Get Less than of dataframe and other, element-wise (binary operator lt).
This docstring was copied from pandas.core.frame.DataFrame.lt.
Some inconsistencies with the Dask version may exist.
Among flexible wrappers (eq, ne, le, lt, ge, gt) to comparison operators.
Equivalent to ==, !=, <=, <, >=, > with support to choose axis (rows or columns) and level for comparison.
Parameters: other : scalar, sequence, Series, or DataFrame
Any single or multiple element data structure, or list-like object.
axis : {0 or ‘index’, 1 or ‘columns’}, default ‘columns’
Whether to compare by the index (0 or ‘index’) or columns (1 or ‘columns’).
level : int or label
Broadcast across a level, matching Index values on the passed MultiIndex level.
Returns: DataFrame of bool
Result of the comparison.
See also
DataFrame.eq
- Compare DataFrames for equality elementwise.
DataFrame.ne
- Compare DataFrames for inequality elementwise.
DataFrame.le
- Compare DataFrames for less than inequality or equality elementwise.
DataFrame.lt
- Compare DataFrames for strictly less than inequality elementwise.
DataFrame.ge
- Compare DataFrames for greater than inequality or equality elementwise.
DataFrame.gt
- Compare DataFrames for strictly greater than inequality elementwise.
Notes
Mismatched indices will be unioned together. NaN values are considered different (i.e. NaN != NaN).
Examples
>>> df = pd.DataFrame({'cost': [250, 150, 100], # doctest: +SKIP ... 'revenue': [100, 250, 300]}, ... index=['A', 'B', 'C']) >>> df # doctest: +SKIP cost revenue A 250 100 B 150 250 C 100 300
Comparison with a scalar, using either the operator or method:
>>> df == 100 # doctest: +SKIP cost revenue A False True B False False C True False
>>> df.eq(100) # doctest: +SKIP cost revenue A False True B False False C True False
When other is a Series, the columns of a DataFrame are aligned with the index of other and broadcast:
>>> df != pd.Series([100, 250], index=["cost", "revenue"]) # doctest: +SKIP cost revenue A True True B True False C False True
Use the method to control the broadcast axis:
>>> df.ne(pd.Series([100, 300], index=["A", "D"]), axis='index') # doctest: +SKIP cost revenue A True False B True True C True True D True True
When comparing to an arbitrary sequence, the number of columns must match the number of elements in other:
>>> df == [250, 100] # doctest: +SKIP cost revenue A True True B False False C False False
Use the method to control the axis:
>>> df.eq([250, 250, 100], axis='index') # doctest: +SKIP cost revenue A True False B False True C True False
Compare to a DataFrame of different shape.
>>> other = pd.DataFrame({'revenue': [300, 250, 100, 150]}, # doctest: +SKIP ... index=['A', 'B', 'C', 'D']) >>> other # doctest: +SKIP revenue A 300 B 250 C 100 D 150
>>> df.gt(other) # doctest: +SKIP cost revenue A False False B False False C False True D False False
Compare to a MultiIndex by level.
>>> df_multindex = pd.DataFrame({'cost': [250, 150, 100, 150, 300, 220], # doctest: +SKIP ... 'revenue': [100, 250, 300, 200, 175, 225]}, ... index=[['Q1', 'Q1', 'Q1', 'Q2', 'Q2', 'Q2'], ... ['A', 'B', 'C', 'A', 'B', 'C']]) >>> df_multindex # doctest: +SKIP cost revenue Q1 A 250 100 B 150 250 C 100 300 Q2 A 150 200 B 300 175 C 220 225
>>> df.le(df_multindex, level=1) # doctest: +SKIP cost revenue Q1 A True True B True True C True True Q2 A False True B True False C True False
-
map_overlap
(func, before, after, *args, **kwargs)¶ Apply a function to each partition, sharing rows with adjacent partitions.
This can be useful for implementing windowing functions such as df.rolling(...).mean() or df.diff().
Parameters: func : function
Function applied to each partition.
before : int
The number of rows to prepend to partition i from the end of partition i - 1.
after : int
The number of rows to append to partition i from the beginning of partition i + 1.
args, kwargs :
Arguments and keywords to pass to the function. The partition will be the first argument, and these will be passed after.
meta : pd.DataFrame, pd.Series, dict, iterable, tuple, optional
An empty pd.DataFrame or pd.Series that matches the dtypes and column names of the output. This metadata is necessary for many algorithms in dask dataframe to work. For ease of use, some alternative inputs are also available. Instead of a DataFrame, a dict of {name: dtype} or iterable of (name, dtype) can be provided (note that the order of the names should match the order of the columns). Instead of a series, a tuple of (name, dtype) can be used. If not provided, dask will try to infer the metadata. This may lead to unexpected results, so providing meta is recommended. For more information, see dask.dataframe.utils.make_meta.
Notes
Given positive integers before and after, and a function func, map_overlap does the following:
- Prepend before rows to each partition i from the end of partition i - 1. The first partition has no rows prepended.
- Append after rows to each partition i from the beginning of partition i + 1. The last partition has no rows appended.
- Apply func to each partition, passing in any extra args and kwargs if provided.
- Trim before rows from the beginning of all but the first partition.
- Trim after rows from the end of all but the last partition.
Note that the index and divisions are assumed to remain unchanged.
Examples
Given a DataFrame, Series, or Index, such as:
>>> import dask.dataframe as dd >>> df = pd.DataFrame({'x': [1, 2, 4, 7, 11], ... 'y': [1., 2., 3., 4., 5.]}) >>> ddf = dd.from_pandas(df, npartitions=2)
A rolling sum with a trailing moving window of size 2 can be computed by overlapping 2 rows before each partition, and then mapping calls to df.rolling(2).sum():
>>> ddf.compute() x y 0 1 1.0 1 2 2.0 2 4 3.0 3 7 4.0 4 11 5.0 >>> ddf.map_overlap(lambda df: df.rolling(2).sum(), 2, 0).compute() x y 0 NaN NaN 1 3.0 3.0 2 6.0 5.0 3 11.0 7.0 4 18.0 9.0
The pandas diff method computes a discrete difference shifted by a number of periods (can be positive or negative). This can be implemented by mapping calls to df.diff to each partition after prepending/appending that many rows, depending on sign:
>>> def diff(df, periods=1): ... before, after = (periods, 0) if periods > 0 else (0, -periods) ... return df.map_overlap(lambda df, periods=1: df.diff(periods), ... periods, 0, periods=periods) >>> diff(ddf, 1).compute() x y 0 NaN NaN 1 1.0 1.0 2 2.0 1.0 3 3.0 1.0 4 4.0 1.0
If you have a DatetimeIndex, you can use a pd.Timedelta for time-based windows.
>>> ts = pd.Series(range(10), index=pd.date_range('2017', periods=10)) >>> dts = dd.from_pandas(ts, npartitions=2) >>> dts.map_overlap(lambda df: df.rolling('2D').sum(), ... pd.Timedelta('2D'), 0).compute() 2017-01-01 0.0 2017-01-02 1.0 2017-01-03 3.0 2017-01-04 5.0 2017-01-05 7.0 2017-01-06 9.0 2017-01-07 11.0 2017-01-08 13.0 2017-01-09 15.0 2017-01-10 17.0 Freq: D, dtype: float64
-
map_partitions
(func, *args, **kwargs)¶ Apply Python function on each DataFrame partition.
Note that the index and divisions are assumed to remain unchanged.
Parameters: func : function
Function applied to each partition.
args, kwargs :
Arguments and keywords to pass to the function. The partition will be the first argument, and these will be passed after. Arguments and keywords may contain Scalar, Delayed or regular python objects. DataFrame-like args (both dask and pandas) will be repartitioned to align (if necessary) before applying the function.
meta : pd.DataFrame, pd.Series, dict, iterable, tuple, optional
An empty pd.DataFrame or pd.Series that matches the dtypes and column names of the output. This metadata is necessary for many algorithms in dask dataframe to work. For ease of use, some alternative inputs are also available. Instead of a DataFrame, a dict of {name: dtype} or iterable of (name, dtype) can be provided (note that the order of the names should match the order of the columns). Instead of a series, a tuple of (name, dtype) can be used. If not provided, dask will try to infer the metadata. This may lead to unexpected results, so providing meta is recommended. For more information, see dask.dataframe.utils.make_meta.
Examples
Given a DataFrame, Series, or Index, such as:
>>> import dask.dataframe as dd >>> df = pd.DataFrame({'x': [1, 2, 3, 4, 5], ... 'y': [1., 2., 3., 4., 5.]}) >>> ddf = dd.from_pandas(df, npartitions=2)
One can use map_partitions to apply a function on each partition. Extra arguments and keywords can optionally be provided, and will be passed to the function after the partition.
Here we apply a function with arguments and keywords to a DataFrame, resulting in a Series:
>>> def myadd(df, a, b=1): ... return df.x + df.y + a + b >>> res = ddf.map_partitions(myadd, 1, b=2) >>> res.dtype dtype('float64')
By default, dask tries to infer the output metadata by running your provided function on some fake data. This works well in many cases, but can sometimes be expensive, or even fail. To avoid this, you can manually specify the output metadata with the meta keyword. This can be specified in many forms, for more information see dask.dataframe.utils.make_meta.
Here we specify the output is a Series with no name, and dtype float64:
>>> res = ddf.map_partitions(myadd, 1, b=2, meta=(None, 'f8'))
Here we map a function that takes in a DataFrame, and returns a DataFrame with a new column:
>>> res = ddf.map_partitions(lambda df: df.assign(z=df.x * df.y)) >>> res.dtypes x int64 y float64 z float64 dtype: object
As before, the output metadata can also be specified manually. This time we pass in a dict, as the output is a DataFrame:
>>> res = ddf.map_partitions(lambda df: df.assign(z=df.x * df.y), ... meta={'x': 'i8', 'y': 'f8', 'z': 'f8'})
In the case where the metadata doesn’t change, you can also pass in the object itself directly:
>>> res = ddf.map_partitions(lambda df: df.head(), meta=df)
Also note that the index and divisions are assumed to remain unchanged. If the function you’re mapping changes the index/divisions, you’ll need to clear them afterwards:
>>> ddf.map_partitions(func).clear_divisions() # doctest: +SKIP
-
mask
(cond, other=nan)¶ Replace values where the condition is True.
This docstring was copied from pandas.core.frame.DataFrame.mask.
Some inconsistencies with the Dask version may exist.
Parameters: cond : boolean Series/DataFrame, array-like, or callable
Where cond is False, keep the original value. Where True, replace with corresponding value from other. If cond is callable, it is computed on the Series/DataFrame and should return boolean Series/DataFrame or array. The callable must not change input Series/DataFrame (though pandas doesn’t check it).
New in version 0.18.1: A callable can be used as cond.
other : scalar, Series/DataFrame, or callable
Entries where cond is True are replaced with corresponding value from other. If other is callable, it is computed on the Series/DataFrame and should return scalar or Series/DataFrame. The callable must not change input Series/DataFrame (though pandas doesn’t check it).
New in version 0.18.1: A callable can be used as other.
inplace : bool, default False (Not supported in Dask)
Whether to perform the operation in place on the data.
axis : int, default None (Not supported in Dask)
Alignment axis if needed.
level : int, default None (Not supported in Dask)
Alignment level if needed.
errors : str, {‘raise’, ‘ignore’}, default ‘raise’ (Not supported in Dask)
Note that currently this parameter won’t affect the results and will always coerce to a suitable dtype.
- ‘raise’ : allow exceptions to be raised.
- ‘ignore’ : suppress exceptions. On error return original object.
try_cast : bool, default False (Not supported in Dask)
Try to cast the result back to the input type (if possible).
Returns: Same type as caller
See also
DataFrame.where()
- Return an object of same shape as self.
Notes
The mask method is an application of the if-then idiom. For each element in the calling DataFrame, if cond is False the element is used; otherwise the corresponding element from the DataFrame other is used.
The signature for DataFrame.where() differs from numpy.where(). Roughly df1.where(m, df2) is equivalent to np.where(m, df1, df2).
For further details and examples see the mask documentation in indexing.
Examples
>>> s = pd.Series(range(5)) # doctest: +SKIP >>> s.where(s > 0) # doctest: +SKIP 0 NaN 1 1.0 2 2.0 3 3.0 4 4.0 dtype: float64
>>> s.mask(s > 0) # doctest: +SKIP 0 0.0 1 NaN 2 NaN 3 NaN 4 NaN dtype: float64
>>> s.where(s > 1, 10) # doctest: +SKIP 0 10 1 10 2 2 3 3 4 4 dtype: int64
>>> df = pd.DataFrame(np.arange(10).reshape(-1, 2), columns=['A', 'B']) # doctest: +SKIP >>> df # doctest: +SKIP A B 0 0 1 1 2 3 2 4 5 3 6 7 4 8 9 >>> m = df % 3 == 0 # doctest: +SKIP >>> df.where(m, -df) # doctest: +SKIP A B 0 0 -1 1 -2 3 2 -4 -5 3 6 -7 4 -8 9 >>> df.where(m, -df) == np.where(m, df, -df) # doctest: +SKIP A B 0 True True 1 True True 2 True True 3 True True 4 True True >>> df.where(m, -df) == df.mask(~m, -df) # doctest: +SKIP A B 0 True True 1 True True 2 True True 3 True True 4 True True
-
max
(axis=None, skipna=True, split_every=False, out=None)¶ Return the maximum of the values for the requested axis.
This docstring was copied from pandas.core.frame.DataFrame.max.
Some inconsistencies with the Dask version may exist.
If you want the index of the maximum, use idxmax. This is the equivalent of the numpy.ndarray method argmax.
Parameters: axis : {index (0), columns (1)}
Axis for the function to be applied on.
skipna : bool, default True
Exclude NA/null values when computing the result.
level : int or level name, default None (Not supported in Dask)
If the axis is a MultiIndex (hierarchical), count along a particular level, collapsing into a Series.
numeric_only : bool, default None (Not supported in Dask)
Include only float, int, boolean columns. If None, will attempt to use everything, then use only numeric data. Not implemented for Series.
**kwargs
Additional keyword arguments to be passed to the function.
Returns: Series or DataFrame (if level specified)
See also
Series.sum
- Return the sum.
Series.min
- Return the minimum.
Series.max
- Return the maximum.
Series.idxmin
- Return the index of the minimum.
Series.idxmax
- Return the index of the maximum.
DataFrame.sum
- Return the sum over the requested axis.
DataFrame.min
- Return the minimum over the requested axis.
DataFrame.max
- Return the maximum over the requested axis.
DataFrame.idxmin
- Return the index of the minimum over the requested axis.
DataFrame.idxmax
- Return the index of the maximum over the requested axis.
Examples
>>> idx = pd.MultiIndex.from_arrays([ # doctest: +SKIP ... ['warm', 'warm', 'cold', 'cold'], ... ['dog', 'falcon', 'fish', 'spider']], ... names=['blooded', 'animal']) >>> s = pd.Series([4, 2, 0, 8], name='legs', index=idx) # doctest: +SKIP >>> s # doctest: +SKIP blooded animal warm dog 4 falcon 2 cold fish 0 spider 8 Name: legs, dtype: int64
>>> s.max() # doctest: +SKIP 8
Max using level names, as well as indices.
>>> s.max(level='blooded') # doctest: +SKIP blooded warm 4 cold 8 Name: legs, dtype: int64
>>> s.max(level=0) # doctest: +SKIP blooded warm 4 cold 8 Name: legs, dtype: int64
-
mean
(axis=None, skipna=True, split_every=False, dtype=None, out=None)¶ Return the mean of the values for the requested axis.
This docstring was copied from pandas.core.frame.DataFrame.mean.
Some inconsistencies with the Dask version may exist.
Parameters: axis : {index (0), columns (1)}
Axis for the function to be applied on.
skipna : bool, default True
Exclude NA/null values when computing the result.
level : int or level name, default None (Not supported in Dask)
If the axis is a MultiIndex (hierarchical), count along a particular level, collapsing into a Series.
numeric_only : bool, default None (Not supported in Dask)
Include only float, int, boolean columns. If None, will attempt to use everything, then use only numeric data. Not implemented for Series.
**kwargs
Additional keyword arguments to be passed to the function.
Returns: Series or DataFrame (if level specified)
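Since the copied pandas docstring carries no example here, a small hedged sketch of the Dask usage (toy values assumed, not from the source):
>>> import pandas as pd
>>> import dask.dataframe as dd
>>> ddf = dd.from_pandas(pd.DataFrame({'a': [1, 2, 3, 4], 'b': [10., 20., 30., 40.]}), npartitions=2)
>>> ddf.mean().compute()           # column-wise means, combined across partitions
a     2.5
b    25.0
dtype: float64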
-
melt
(id_vars=None, value_vars=None, var_name=None, value_name='value', col_level=None)¶ Unpivots a DataFrame from wide format to long format, optionally leaving identifier variables set.
This function is useful to massage a DataFrame into a format where one or more columns are identifier variables (id_vars), while all other columns, considered measured variables (value_vars), are “unpivoted” to the row axis, leaving just two non-identifier columns, ‘variable’ and ‘value’.
Parameters: frame : DataFrame
id_vars : tuple, list, or ndarray, optional
Column(s) to use as identifier variables.
value_vars : tuple, list, or ndarray, optional
Column(s) to unpivot. If not specified, uses all columns that are not set as id_vars.
var_name : scalar
Name to use for the ‘variable’ column. If None it uses frame.columns.name or ‘variable’.
value_name : scalar, default ‘value’
Name to use for the ‘value’ column.
col_level : int or string, optional
If columns are a MultiIndex then use this level to melt.
Returns: DataFrame
Unpivoted DataFrame.
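Since no example survives in the copied docstring, a brief hedged sketch of melting a Dask DataFrame (toy data assumed):
>>> import pandas as pd
>>> import dask.dataframe as dd
>>> pdf = pd.DataFrame({'A': ['a', 'b'], 'B': [1, 2], 'C': [3, 4]})
>>> ddf = dd.from_pandas(pdf, npartitions=1)
>>> ddf.melt(id_vars=['A'], value_vars=['B', 'C']).compute()
   A variable  value
0  a        B      1
1  b        B      2
2  a        C      3
3  b        C      4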
-
memory_usage
(index=True, deep=False)¶ Return the memory usage of each column in bytes.
This docstring was copied from pandas.core.frame.DataFrame.memory_usage.
Some inconsistencies with the Dask version may exist.
The memory usage can optionally include the contribution of the index and elements of object dtype.
This value is displayed in DataFrame.info by default. This can be suppressed by setting pandas.options.display.memory_usage to False.
Parameters: index : bool, default True
Specifies whether to include the memory usage of the DataFrame’s index in returned Series. If index=True, the memory usage of the index is the first item in the output.
deep : bool, default False
If True, introspect the data deeply by interrogating object dtypes for system-level memory consumption, and include it in the returned values.
Returns: Series
A Series whose index is the original column names and whose values is the memory usage of each column in bytes.
See also
numpy.ndarray.nbytes
- Total bytes consumed by the elements of an ndarray.
Series.memory_usage
- Bytes consumed by a Series.
Categorical
- Memory-efficient array for string values with many repeated values.
DataFrame.info
- Concise summary of a DataFrame.
Examples
>>> dtypes = ['int64', 'float64', 'complex128', 'object', 'bool'] # doctest: +SKIP >>> data = dict([(t, np.ones(shape=5000).astype(t)) # doctest: +SKIP ... for t in dtypes]) >>> df = pd.DataFrame(data) # doctest: +SKIP >>> df.head() # doctest: +SKIP int64 float64 complex128 object bool 0 1 1.0 1.000000+0.000000j 1 True 1 1 1.0 1.000000+0.000000j 1 True 2 1 1.0 1.000000+0.000000j 1 True 3 1 1.0 1.000000+0.000000j 1 True 4 1 1.0 1.000000+0.000000j 1 True
>>> df.memory_usage() # doctest: +SKIP Index 128 int64 40000 float64 40000 complex128 80000 object 40000 bool 5000 dtype: int64
>>> df.memory_usage(index=False) # doctest: +SKIP int64 40000 float64 40000 complex128 80000 object 40000 bool 5000 dtype: int64
The memory footprint of object dtype columns is ignored by default:
>>> df.memory_usage(deep=True) # doctest: +SKIP Index 128 int64 40000 float64 40000 complex128 80000 object 160000 bool 5000 dtype: int64
Use a Categorical for efficient storage of an object-dtype column with many repeated values.
>>> df['object'].astype('category').memory_usage(deep=True) # doctest: +SKIP 5216
-
merge
(right, how='inner', on=None, left_on=None, right_on=None, left_index=False, right_index=False, suffixes=('_x', '_y'), indicator=False, npartitions=None, shuffle=None)¶ Merge the DataFrame with another DataFrame
This will merge the two datasets, either on the indices, a certain column in each dataset or the index in one dataset and the column in another.
Parameters: right: dask.dataframe.DataFrame
how : {‘left’, ‘right’, ‘outer’, ‘inner’}, default: ‘inner’
How to handle the operation of the two objects:
- left: use calling frame’s index (or column if on is specified)
- right: use other frame’s index
- outer: form union of calling frame’s index (or column if on is specified) with other frame’s index, and sort it lexicographically
- inner: form intersection of calling frame’s index (or column if on is specified) with other frame’s index, preserving the order of the calling’s one
on : label or list
Column or index level names to join on. These must be found in both DataFrames. If on is None and not merging on indexes then this defaults to the intersection of the columns in both DataFrames.
left_on : label or list, or array-like
Column to join on in the left DataFrame. Unlike in pandas, arrays and lists are only supported if their length is 1.
right_on : label or list, or array-like
Column to join on in the right DataFrame. Unlike in pandas, arrays and lists are only supported if their length is 1.
left_index : boolean, default False
Use the index from the left DataFrame as the join key.
right_index : boolean, default False
Use the index from the right DataFrame as the join key.
suffixes : 2-length sequence (tuple, list, …)
Suffix to apply to overlapping column names in the left and right side, respectively
indicator : boolean or string, default False
If True, adds a column to output DataFrame called “_merge” with information on the source of each row. If string, column with information on source of each row will be added to output DataFrame, and column will be named value of string. Information column is Categorical-type and takes on a value of “left_only” for observations whose merge key only appears in left DataFrame, “right_only” for observations whose merge key only appears in right DataFrame, and “both” if the observation’s merge key is found in both.
npartitions: int or None, optional
The ideal number of output partitions. This is only utilised when performing a hash_join (merging on columns only). If
None
thennpartitions = max(lhs.npartitions, rhs.npartitions)
. Default isNone
.shuffle: {‘disk’, ‘tasks’}, optional
Either
'disk'
for single-node operation or'tasks'
for distributed operation. Will be inferred by your current scheduler.Notes
There are three ways to join dataframes:
- Joining on indices. In this case the divisions are
aligned using the function
dask.dataframe.multi.align_partitions
. Afterwards, each partition is merged with the pandas merge function. - Joining one on index and one on column. In this case the divisions of
dataframe merged by index (\(d_i\)) are used to divide the column
merged dataframe (\(d_c\)) one using
dask.dataframe.multi.rearrange_by_divisions
. In this case the merged dataframe (\(d_m\)) has the exact same divisions as (\(d_i\)). This can lead to issues if you merge multiple rows from (\(d_c\)) to one row in (\(d_i\)). - Joining both on columns. In this case a hash join is performed using
dask.dataframe.multi.hash_join
.
-
min
(axis=None, skipna=True, split_every=False, out=None)¶ Return the minimum of the values for the requested axis.
This docstring was copied from pandas.core.frame.DataFrame.min.
Some inconsistencies with the Dask version may exist.
If you want the index of the minimum, use idxmin. This is the equivalent of the numpy.ndarray method argmin.
Parameters: axis : {index (0), columns (1)}
Axis for the function to be applied on.
skipna : bool, default True
Exclude NA/null values when computing the result.
level : int or level name, default None (Not supported in Dask)
If the axis is a MultiIndex (hierarchical), count along a particular level, collapsing into a Series.
numeric_only : bool, default None (Not supported in Dask)
Include only float, int, boolean columns. If None, will attempt to use everything, then use only numeric data. Not implemented for Series.
**kwargs
Additional keyword arguments to be passed to the function.
Returns: Series or DataFrame (if level specified)
See also
Series.sum
- Return the sum.
Series.min
- Return the minimum.
Series.max
- Return the maximum.
Series.idxmin
- Return the index of the minimum.
Series.idxmax
- Return the index of the maximum.
DataFrame.sum
- Return the sum over the requested axis.
DataFrame.min
- Return the minimum over the requested axis.
DataFrame.max
- Return the maximum over the requested axis.
DataFrame.idxmin
- Return the index of the minimum over the requested axis.
DataFrame.idxmax
- Return the index of the maximum over the requested axis.
Examples
>>> idx = pd.MultiIndex.from_arrays([ # doctest: +SKIP ... ['warm', 'warm', 'cold', 'cold'], ... ['dog', 'falcon', 'fish', 'spider']], ... names=['blooded', 'animal']) >>> s = pd.Series([4, 2, 0, 8], name='legs', index=idx) # doctest: +SKIP >>> s # doctest: +SKIP blooded animal warm dog 4 falcon 2 cold fish 0 spider 8 Name: legs, dtype: int64
>>> s.min() # doctest: +SKIP 0
Min using level names, as well as indices.
>>> s.min(level='blooded') # doctest: +SKIP blooded warm 2 cold 0 Name: legs, dtype: int64
>>> s.min(level=0) # doctest: +SKIP blooded warm 2 cold 0 Name: legs, dtype: int64
-
mod
(other, axis='columns', level=None, fill_value=None)¶ Get Modulo of dataframe and other, element-wise (binary operator mod).
This docstring was copied from pandas.core.frame.DataFrame.mod.
Some inconsistencies with the Dask version may exist.
Equivalent to dataframe % other, but with support to substitute a fill_value for missing data in one of the inputs. With reverse version, rmod.
Among flexible wrappers (add, sub, mul, div, mod, pow) to arithmetic operators: +, -, *, /, //, %, **.
Parameters: other : scalar, sequence, Series, or DataFrame
Any single or multiple element data structure, or list-like object.
axis : {0 or ‘index’, 1 or ‘columns’}
Whether to compare by the index (0 or ‘index’) or columns (1 or ‘columns’). For Series input, axis to match Series index on.
level : int or label
Broadcast across a level, matching Index values on the passed MultiIndex level.
fill_value : float or None, default None
Fill existing missing (NaN) values, and any new element needed for successful DataFrame alignment, with this value before computation. If data in both corresponding DataFrame locations is missing the result will be missing.
Returns: DataFrame
Result of the arithmetic operation.
See also
DataFrame.add
- Add DataFrames.
DataFrame.sub
- Subtract DataFrames.
DataFrame.mul
- Multiply DataFrames.
DataFrame.div
- Divide DataFrames (float division).
DataFrame.truediv
- Divide DataFrames (float division).
DataFrame.floordiv
- Divide DataFrames (integer division).
DataFrame.mod
- Calculate modulo (remainder after division).
DataFrame.pow
- Calculate exponential power.
Notes
Mismatched indices will be unioned together.
Examples
>>> df = pd.DataFrame({'angles': [0, 3, 4], # doctest: +SKIP ... 'degrees': [360, 180, 360]}, ... index=['circle', 'triangle', 'rectangle']) >>> df # doctest: +SKIP angles degrees circle 0 360 triangle 3 180 rectangle 4 360
Add a scalar using the operator version, which returns the same results.
>>> df + 1 # doctest: +SKIP angles degrees circle 1 361 triangle 4 181 rectangle 5 361
>>> df.add(1) # doctest: +SKIP angles degrees circle 1 361 triangle 4 181 rectangle 5 361
Divide by constant with reverse version.
>>> df.div(10) # doctest: +SKIP angles degrees circle 0.0 36.0 triangle 0.3 18.0 rectangle 0.4 36.0
>>> df.rdiv(10) # doctest: +SKIP angles degrees circle inf 0.027778 triangle 3.333333 0.055556 rectangle 2.500000 0.027778
Subtract a list and Series by axis with operator version.
>>> df - [1, 2] # doctest: +SKIP angles degrees circle -1 358 triangle 2 178 rectangle 3 358
>>> df.sub([1, 2], axis='columns') # doctest: +SKIP angles degrees circle -1 358 triangle 2 178 rectangle 3 358
>>> df.sub(pd.Series([1, 1, 1], index=['circle', 'triangle', 'rectangle']), # doctest: +SKIP ... axis='index') angles degrees circle -1 359 triangle 2 179 rectangle 3 359
Multiply a DataFrame of different shape with operator version.
>>> other = pd.DataFrame({'angles': [0, 3, 4]}, # doctest: +SKIP ... index=['circle', 'triangle', 'rectangle']) >>> other # doctest: +SKIP angles circle 0 triangle 3 rectangle 4
>>> df * other # doctest: +SKIP angles degrees circle 0 NaN triangle 9 NaN rectangle 16 NaN
>>> df.mul(other, fill_value=0) # doctest: +SKIP angles degrees circle 0 0.0 triangle 9 0.0 rectangle 16 0.0
Divide by a MultiIndex by level.
>>> df_multindex = pd.DataFrame({'angles': [0, 3, 4, 4, 5, 6], # doctest: +SKIP ... 'degrees': [360, 180, 360, 360, 540, 720]}, ... index=[['A', 'A', 'A', 'B', 'B', 'B'], ... ['circle', 'triangle', 'rectangle', ... 'square', 'pentagon', 'hexagon']]) >>> df_multindex # doctest: +SKIP angles degrees A circle 0 360 triangle 3 180 rectangle 4 360 B square 4 360 pentagon 5 540 hexagon 6 720
>>> df.div(df_multindex, level=1, fill_value=0) # doctest: +SKIP angles degrees A circle NaN 1.0 triangle 1.0 1.0 rectangle 1.0 1.0 B square 0.0 0.0 pentagon 0.0 0.0 hexagon 0.0 0.0
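The copied pandas examples above do not exercise mod itself; a minimal Dask sketch (the frame and values here are made up for illustration, not from the source) might look like:
>>> import pandas as pd
>>> import dask.dataframe as dd
>>> ddf = dd.from_pandas(pd.DataFrame({'a': [5, 7, 9], 'b': [10, 11, 12]}),
...                      npartitions=2)
>>> ddf.mod(3).compute()   # remainder after dividing every element by 3
   a  b
0  2  1
1  1  2
2  0  0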
-
mul
(other, axis='columns', level=None, fill_value=None)¶ Get Multiplication of dataframe and other, element-wise (binary operator mul).
This docstring was copied from pandas.core.frame.DataFrame.mul.
Some inconsistencies with the Dask version may exist.
Equivalent to dataframe * other, but with support to substitute a fill_value for missing data in one of the inputs. With reverse version, rmul.
Among flexible wrappers (add, sub, mul, div, mod, pow) to arithmetic operators: +, -, *, /, //, %, **.
Parameters: other : scalar, sequence, Series, or DataFrame
Any single or multiple element data structure, or list-like object.
axis : {0 or ‘index’, 1 or ‘columns’}
Whether to compare by the index (0 or ‘index’) or columns (1 or ‘columns’). For Series input, axis to match Series index on.
level : int or label
Broadcast across a level, matching Index values on the passed MultiIndex level.
fill_value : float or None, default None
Fill existing missing (NaN) values, and any new element needed for successful DataFrame alignment, with this value before computation. If data in both corresponding DataFrame locations is missing the result will be missing.
Returns: DataFrame
Result of the arithmetic operation.
See also
DataFrame.add
- Add DataFrames.
DataFrame.sub
- Subtract DataFrames.
DataFrame.mul
- Multiply DataFrames.
DataFrame.div
- Divide DataFrames (float division).
DataFrame.truediv
- Divide DataFrames (float division).
DataFrame.floordiv
- Divide DataFrames (integer division).
DataFrame.mod
- Calculate modulo (remainder after division).
DataFrame.pow
- Calculate exponential power.
Notes
Mismatched indices will be unioned together.
Examples
>>> df = pd.DataFrame({'angles': [0, 3, 4], # doctest: +SKIP ... 'degrees': [360, 180, 360]}, ... index=['circle', 'triangle', 'rectangle']) >>> df # doctest: +SKIP angles degrees circle 0 360 triangle 3 180 rectangle 4 360
Add a scalar with the operator version, which returns the same results.
>>> df + 1 # doctest: +SKIP angles degrees circle 1 361 triangle 4 181 rectangle 5 361
>>> df.add(1) # doctest: +SKIP angles degrees circle 1 361 triangle 4 181 rectangle 5 361
Divide by constant with reverse version.
>>> df.div(10) # doctest: +SKIP angles degrees circle 0.0 36.0 triangle 0.3 18.0 rectangle 0.4 36.0
>>> df.rdiv(10) # doctest: +SKIP angles degrees circle inf 0.027778 triangle 3.333333 0.055556 rectangle 2.500000 0.027778
Subtract a list and Series by axis with operator version.
>>> df - [1, 2] # doctest: +SKIP angles degrees circle -1 358 triangle 2 178 rectangle 3 358
>>> df.sub([1, 2], axis='columns') # doctest: +SKIP angles degrees circle -1 358 triangle 2 178 rectangle 3 358
>>> df.sub(pd.Series([1, 1, 1], index=['circle', 'triangle', 'rectangle']), # doctest: +SKIP ... axis='index') angles degrees circle -1 359 triangle 2 179 rectangle 3 359
Multiply a DataFrame of different shape with operator version.
>>> other = pd.DataFrame({'angles': [0, 3, 4]}, # doctest: +SKIP ... index=['circle', 'triangle', 'rectangle']) >>> other # doctest: +SKIP angles circle 0 triangle 3 rectangle 4
>>> df * other # doctest: +SKIP angles degrees circle 0 NaN triangle 9 NaN rectangle 16 NaN
>>> df.mul(other, fill_value=0) # doctest: +SKIP angles degrees circle 0 0.0 triangle 9 0.0 rectangle 16 0.0
Divide by a MultiIndex by level.
>>> df_multindex = pd.DataFrame({'angles': [0, 3, 4, 4, 5, 6], # doctest: +SKIP ... 'degrees': [360, 180, 360, 360, 540, 720]}, ... index=[['A', 'A', 'A', 'B', 'B', 'B'], ... ['circle', 'triangle', 'rectangle', ... 'square', 'pentagon', 'hexagon']]) >>> df_multindex # doctest: +SKIP angles degrees A circle 0 360 triangle 3 180 rectangle 4 360 B square 4 360 pentagon 5 540 hexagon 6 720
>>> df.div(df_multindex, level=1, fill_value=0) # doctest: +SKIP angles degrees A circle NaN 1.0 triangle 1.0 1.0 rectangle 1.0 1.0 B square 0.0 0.0 pentagon 0.0 0.0 hexagon 0.0 0.0
-
ndim
¶ Return dimensionality
-
ne
(other, axis='columns', level=None)¶ Get Not equal to of dataframe and other, element-wise (binary operator ne).
This docstring was copied from pandas.core.frame.DataFrame.ne.
Some inconsistencies with the Dask version may exist.
Among flexible wrappers (eq, ne, le, lt, ge, gt) to comparison operators.
Equivalent to ==, !=, <=, <, >=, > with support to choose axis (rows or columns) and level for comparison.
Parameters: other : scalar, sequence, Series, or DataFrame
Any single or multiple element data structure, or list-like object.
axis : {0 or ‘index’, 1 or ‘columns’}, default ‘columns’
Whether to compare by the index (0 or ‘index’) or columns (1 or ‘columns’).
level : int or label
Broadcast across a level, matching Index values on the passed MultiIndex level.
Returns: DataFrame of bool
Result of the comparison.
See also
DataFrame.eq
- Compare DataFrames for equality elementwise.
DataFrame.ne
- Compare DataFrames for inequality elementwise.
DataFrame.le
- Compare DataFrames for less than inequality or equality elementwise.
DataFrame.lt
- Compare DataFrames for strictly less than inequality elementwise.
DataFrame.ge
- Compare DataFrames for greater than inequality or equality elementwise.
DataFrame.gt
- Compare DataFrames for strictly greater than inequality elementwise.
Notes
Mismatched indices will be unioned together. NaN values are considered different (i.e. NaN != NaN).
Examples
>>> df = pd.DataFrame({'cost': [250, 150, 100], # doctest: +SKIP ... 'revenue': [100, 250, 300]}, ... index=['A', 'B', 'C']) >>> df # doctest: +SKIP cost revenue A 250 100 B 150 250 C 100 300
Comparison with a scalar, using either the operator or method:
>>> df == 100 # doctest: +SKIP cost revenue A False True B False False C True False
>>> df.eq(100) # doctest: +SKIP cost revenue A False True B False False C True False
When other is a Series, the columns of a DataFrame are aligned with the index of other and broadcast:
>>> df != pd.Series([100, 250], index=["cost", "revenue"])  # doctest: +SKIP
    cost  revenue
A   True     True
B   True    False
C  False     True
Use the method to control the broadcast axis:
>>> df.ne(pd.Series([100, 300], index=["A", "D"]), axis='index') # doctest: +SKIP cost revenue A True False B True True C True True D True True
When comparing to an arbitrary sequence, the number of columns must match the number of elements in other:
>>> df == [250, 100] # doctest: +SKIP cost revenue A True True B False False C False False
Use the method to control the axis:
>>> df.eq([250, 250, 100], axis='index') # doctest: +SKIP cost revenue A True False B False True C True False
Compare to a DataFrame of different shape.
>>> other = pd.DataFrame({'revenue': [300, 250, 100, 150]}, # doctest: +SKIP ... index=['A', 'B', 'C', 'D']) >>> other # doctest: +SKIP revenue A 300 B 250 C 100 D 150
>>> df.gt(other) # doctest: +SKIP cost revenue A False False B False False C False True D False False
Compare to a MultiIndex by level.
>>> df_multindex = pd.DataFrame({'cost': [250, 150, 100, 150, 300, 220], # doctest: +SKIP ... 'revenue': [100, 250, 300, 200, 175, 225]}, ... index=[['Q1', 'Q1', 'Q1', 'Q2', 'Q2', 'Q2'], ... ['A', 'B', 'C', 'A', 'B', 'C']]) >>> df_multindex # doctest: +SKIP cost revenue Q1 A 250 100 B 150 250 C 100 300 Q2 A 150 200 B 300 175 C 220 225
>>> df.le(df_multindex, level=1) # doctest: +SKIP cost revenue Q1 A True True B True True C True True Q2 A False True B True False C True False
-
nlargest
(n=5, columns=None, split_every=None)¶ Return the first n rows ordered by columns in descending order.
This docstring was copied from pandas.core.frame.DataFrame.nlargest.
Some inconsistencies with the Dask version may exist.
Return the first n rows with the largest values in columns, in descending order. The columns that are not specified are returned as well, but not used for ordering.
This method is equivalent to df.sort_values(columns, ascending=False).head(n), but more performant.
Parameters: n : int
Number of rows to return.
columns : label or list of labels
Column label(s) to order by.
keep : {‘first’, ‘last’, ‘all’}, default ‘first’ (Not supported in Dask)
Where there are duplicate values:
- first : prioritize the first occurrence(s)
- last : prioritize the last occurrence(s)
- all : do not drop any duplicates, even if it means selecting more than n items.
New in version 0.24.0.
Returns: DataFrame
The first n rows ordered by the given columns in descending order.
See also
DataFrame.nsmallest
- Return the first n rows ordered by columns in ascending order.
DataFrame.sort_values
- Sort DataFrame by the values.
DataFrame.head
- Return the first n rows without re-ordering.
Notes
This function cannot be used with all column types. For example, when specifying columns with object or category dtypes, TypeError is raised.
Examples
>>> df = pd.DataFrame({'population': [59000000, 65000000, 434000, # doctest: +SKIP ... 434000, 434000, 337000, 11300, ... 11300, 11300], ... 'GDP': [1937894, 2583560 , 12011, 4520, 12128, ... 17036, 182, 38, 311], ... 'alpha-2': ["IT", "FR", "MT", "MV", "BN", ... "IS", "NR", "TV", "AI"]}, ... index=["Italy", "France", "Malta", ... "Maldives", "Brunei", "Iceland", ... "Nauru", "Tuvalu", "Anguilla"]) >>> df # doctest: +SKIP population GDP alpha-2 Italy 59000000 1937894 IT France 65000000 2583560 FR Malta 434000 12011 MT Maldives 434000 4520 MV Brunei 434000 12128 BN Iceland 337000 17036 IS Nauru 11300 182 NR Tuvalu 11300 38 TV Anguilla 11300 311 AI
In the following example, we will use nlargest to select the three rows having the largest values in column “population”.
>>> df.nlargest(3, 'population')  # doctest: +SKIP
        population      GDP alpha-2
France    65000000  2583560      FR
Italy     59000000  1937894      IT
Malta       434000    12011      MT
When using
keep='last'
, ties are resolved in reverse order:>>> df.nlargest(3, 'population', keep='last') # doctest: +SKIP population GDP alpha-2 France 65000000 2583560 FR Italy 59000000 1937894 IT Brunei 434000 12128 BN
When using
keep='all'
, all duplicate items are maintained:>>> df.nlargest(3, 'population', keep='all') # doctest: +SKIP population GDP alpha-2 France 65000000 2583560 FR Italy 59000000 1937894 IT Malta 434000 12011 MT Maldives 434000 4520 MV Brunei 434000 12128 BN
To order by the largest values in column “population” and then “GDP”, we can specify multiple columns like in the next example.
>>> df.nlargest(3, ['population', 'GDP']) # doctest: +SKIP population GDP alpha-2 France 65000000 2583560 FR Italy 59000000 1937894 IT Brunei 434000 12128 BN
-
notnull
()¶ Detect existing (non-missing) values.
This docstring was copied from pandas.core.frame.DataFrame.notnull.
Some inconsistencies with the Dask version may exist.
Return a boolean same-sized object indicating if the values are not NA. Non-missing values get mapped to True. Characters such as empty strings '' or numpy.inf are not considered NA values (unless you set pandas.options.mode.use_inf_as_na = True). NA values, such as None or numpy.NaN, get mapped to False values.
Returns: DataFrame
Mask of bool values for each element in DataFrame that indicates whether an element is not an NA value.
See also
DataFrame.notnull
- Alias of notna.
DataFrame.isna
- Boolean inverse of notna.
DataFrame.dropna
- Omit axes labels with missing values.
notna
- Top-level notna.
Examples
Show which entries in a DataFrame are not NA.
>>> df = pd.DataFrame({'age': [5, 6, np.NaN], # doctest: +SKIP ... 'born': [pd.NaT, pd.Timestamp('1939-05-27'), ... pd.Timestamp('1940-04-25')], ... 'name': ['Alfred', 'Batman', ''], ... 'toy': [None, 'Batmobile', 'Joker']}) >>> df # doctest: +SKIP age born name toy 0 5.0 NaT Alfred None 1 6.0 1939-05-27 Batman Batmobile 2 NaN 1940-04-25 Joker
>>> df.notna() # doctest: +SKIP age born name toy 0 True False True False 1 True True True True 2 False True True True
Show which entries in a Series are not NA.
>>> ser = pd.Series([5, 6, np.NaN]) # doctest: +SKIP >>> ser # doctest: +SKIP 0 5.0 1 6.0 2 NaN dtype: float64
>>> ser.notna() # doctest: +SKIP 0 True 1 True 2 False dtype: bool
-
npartitions
¶ Return number of partitions
-
nsmallest
(n=5, columns=None, split_every=None)¶ Return the first n rows ordered by columns in ascending order.
This docstring was copied from pandas.core.frame.DataFrame.nsmallest.
Some inconsistencies with the Dask version may exist.
Return the first n rows with the smallest values in columns, in ascending order. The columns that are not specified are returned as well, but not used for ordering.
This method is equivalent to df.sort_values(columns, ascending=True).head(n), but more performant.
Parameters: n : int
Number of items to retrieve.
columns : list or str
Column name or names to order by.
keep : {‘first’, ‘last’, ‘all’}, default ‘first’ (Not supported in Dask)
Where there are duplicate values:
- first : take the first occurrence.
- last : take the last occurrence.
- all : do not drop any duplicates, even if it means selecting more than n items.
New in version 0.24.0.
Returns: DataFrame
See also
DataFrame.nlargest
- Return the first n rows ordered by columns in descending order.
DataFrame.sort_values
- Sort DataFrame by the values.
DataFrame.head
- Return the first n rows without re-ordering.
Examples
>>> df = pd.DataFrame({'population': [59000000, 65000000, 434000, # doctest: +SKIP ... 434000, 434000, 337000, 11300, ... 11300, 11300], ... 'GDP': [1937894, 2583560 , 12011, 4520, 12128, ... 17036, 182, 38, 311], ... 'alpha-2': ["IT", "FR", "MT", "MV", "BN", ... "IS", "NR", "TV", "AI"]}, ... index=["Italy", "France", "Malta", ... "Maldives", "Brunei", "Iceland", ... "Nauru", "Tuvalu", "Anguilla"]) >>> df # doctest: +SKIP population GDP alpha-2 Italy 59000000 1937894 IT France 65000000 2583560 FR Malta 434000 12011 MT Maldives 434000 4520 MV Brunei 434000 12128 BN Iceland 337000 17036 IS Nauru 11300 182 NR Tuvalu 11300 38 TV Anguilla 11300 311 AI
In the following example, we will use nsmallest to select the three rows having the smallest values in column “population”.
>>> df.nsmallest(3, 'population')  # doctest: +SKIP
          population  GDP alpha-2
Nauru          11300  182      NR
Tuvalu         11300   38      TV
Anguilla       11300  311      AI
When using
keep='last'
, ties are resolved in reverse order:>>> df.nsmallest(3, 'population', keep='last') # doctest: +SKIP population GDP alpha-2 Anguilla 11300 311 AI Tuvalu 11300 38 TV Nauru 11300 182 NR
When using
keep='all'
, all duplicate items are maintained:>>> df.nsmallest(3, 'population', keep='all') # doctest: +SKIP population GDP alpha-2 Nauru 11300 182 NR Tuvalu 11300 38 TV Anguilla 11300 311 AI
To order by the smallest values in column “population” and then “GDP”, we can specify multiple columns like in the next example.
>>> df.nsmallest(3, ['population', 'GDP']) # doctest: +SKIP population GDP alpha-2 Tuvalu 11300 38 TV Nauru 11300 182 NR Anguilla 11300 311 AI
-
nunique_approx
(split_every=None)¶ Approximate number of unique rows.
This method uses the HyperLogLog algorithm for cardinality estimation to compute the approximate number of unique rows. The approximate error is 0.406%.
Parameters: split_every : int, optional
Group partitions into groups of this size while performing a tree-reduction. If set to False, no tree-reduction will be used. Default is 8.
Returns: a float representing the approximate number of elements
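As a quick illustration (the data below is made up, not from the source), the estimate is used like any other lazy reduction:
>>> import pandas as pd
>>> import dask.dataframe as dd
>>> pdf = pd.DataFrame({'user': ['a', 'b', 'a', 'c'] * 1000,
...                     'item': [1, 2, 1, 3] * 1000})
>>> ddf = dd.from_pandas(pdf, npartitions=4)
>>> ddf.nunique_approx().compute()   # approximately 3 distinct rows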
-
partitions
¶ Slice dataframe by partitions
This allows partitionwise slicing of a Dask DataFrame. You can perform normal Numpy-style slicing but now rather than slice elements of the array you slice along partitions so, for example, df.partitions[:5] produces a new Dask DataFrame of the first five partitions.
Returns: A Dask DataFrame
Examples
>>> df.partitions[0]  # doctest: +SKIP
>>> df.partitions[:3]  # doctest: +SKIP
>>> df.partitions[::10]  # doctest: +SKIP
-
persist
(**kwargs)¶ Persist this dask collection into memory
This turns a lazy Dask collection into a Dask collection with the same metadata, but now with the results fully computed or actively computing in the background.
The action of this function differs significantly depending on the active task scheduler. If the task scheduler supports asynchronous computing, as is the case for the dask.distributed scheduler, then persist will return immediately and the return value’s task graph will contain Dask Future objects. However, if the task scheduler only supports blocking computation, then the call to persist will block and the return value’s task graph will contain concrete Python results.
This function is particularly useful when using distributed systems, because the results will be kept in distributed memory, rather than returned to the local process as with compute.
Parameters: scheduler : string, optional
Which scheduler to use like “threads”, “synchronous” or “processes”. If not provided, the default is to check the global settings first, and then fall back to the collection defaults.
optimize_graph : bool, optional
If True [default], the graph is optimized before computation. Otherwise the graph is run as is. This can be useful for debugging.
**kwargs
Extra keywords to forward to the scheduler function.
Returns: New dask collections backed by in-memory data
See also
dask.base.persist
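A minimal sketch of the typical pattern (the frame and filter are illustrative, not from the source): persist an intermediate result once, then reuse it for several computations.
>>> import pandas as pd
>>> import dask.dataframe as dd
>>> ddf = dd.from_pandas(pd.DataFrame({'x': range(1000)}), npartitions=10)
>>> subset = ddf[ddf.x > 100]    # still lazy
>>> subset = subset.persist()    # compute now (or in the background) and keep in memory
>>> subset.x.sum().compute()     # later operations reuse the persisted partitions
>>> subset.x.mean().compute()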
-
pipe
(func, *args, **kwargs)¶ Apply func(self, *args, **kwargs).
This docstring was copied from pandas.core.frame.DataFrame.pipe.
Some inconsistencies with the Dask version may exist.
Parameters: func : function
function to apply to the Series/DataFrame.
args, and kwargs are passed into func. Alternatively a (callable, data_keyword) tuple where data_keyword is a string indicating the keyword of callable that expects the Series/DataFrame.
args : iterable, optional
positional arguments passed into func.
kwargs : mapping, optional
a dictionary of keyword arguments passed into func.
Returns: object : the return type of func.
See also
Notes
Use .pipe when chaining together functions that expect Series, DataFrames or GroupBy objects. Instead of writing
>>> f(g(h(df), arg1=a), arg2=b, arg3=c)  # doctest: +SKIP
You can write
>>> (df.pipe(h)  # doctest: +SKIP
...    .pipe(g, arg1=a)
...    .pipe(f, arg2=b, arg3=c)
...  )
If you have a function that takes the data as (say) the second argument, pass a tuple indicating which keyword expects the data. For example, suppose f takes its data as arg2:
>>> (df.pipe(h)  # doctest: +SKIP
...    .pipe(g, arg1=a)
...    .pipe((f, 'arg2'), arg1=a, arg3=c)
...  )
-
pivot_table
(index=None, columns=None, values=None, aggfunc='mean')¶ Create a spreadsheet-style pivot table as a DataFrame. Target columns must have category dtype to infer result’s columns. index, columns, values and aggfunc must be all scalar.
Parameters: values : scalar
column to aggregate
index : scalar
column to be index
columns : scalar
column to be columns
aggfunc : {‘mean’, ‘sum’, ‘count’}, default ‘mean’
Returns: table : DataFrame
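For instance, a minimal sketch (illustrative column names, not from the source; note the categorize step, since the columns target must be categorical with known categories):
>>> import pandas as pd
>>> import dask.dataframe as dd
>>> pdf = pd.DataFrame({'store': ['A', 'A', 'B', 'B'],
...                     'product': ['x', 'y', 'x', 'y'],
...                     'sales': [10, 20, 30, 40]})
>>> ddf = dd.from_pandas(pdf, npartitions=2)
>>> ddf = ddf.categorize(columns=['product'])   # 'columns' target must be category dtype
>>> ddf.pivot_table(index='store', columns='product',
...                 values='sales', aggfunc='sum').compute()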
-
pop
(item)¶ Return item and drop from frame. Raise KeyError if not found.
This docstring was copied from pandas.core.frame.DataFrame.pop.
Some inconsistencies with the Dask version may exist.
Parameters: item : str
Label of column to be popped.
Returns: Series
Examples
>>> df = pd.DataFrame([('falcon', 'bird', 389.0), # doctest: +SKIP ... ('parrot', 'bird', 24.0), ... ('lion', 'mammal', 80.5), ... ('monkey','mammal', np.nan)], ... columns=('name', 'class', 'max_speed')) >>> df # doctest: +SKIP name class max_speed 0 falcon bird 389.0 1 parrot bird 24.0 2 lion mammal 80.5 3 monkey mammal NaN
>>> df.pop('class') # doctest: +SKIP 0 bird 1 bird 2 mammal 3 mammal Name: class, dtype: object
>>> df # doctest: +SKIP name max_speed 0 falcon 389.0 1 parrot 24.0 2 lion 80.5 3 monkey NaN
-
pow
(other, axis='columns', level=None, fill_value=None)¶ Get Exponential power of dataframe and other, element-wise (binary operator pow).
This docstring was copied from pandas.core.frame.DataFrame.pow.
Some inconsistencies with the Dask version may exist.
Equivalent to dataframe ** other, but with support to substitute a fill_value for missing data in one of the inputs. With reverse version, rpow.
Among flexible wrappers (add, sub, mul, div, mod, pow) to arithmetic operators: +, -, *, /, //, %, **.
Parameters: other : scalar, sequence, Series, or DataFrame
Any single or multiple element data structure, or list-like object.
axis : {0 or ‘index’, 1 or ‘columns’}
Whether to compare by the index (0 or ‘index’) or columns (1 or ‘columns’). For Series input, axis to match Series index on.
level : int or label
Broadcast across a level, matching Index values on the passed MultiIndex level.
fill_value : float or None, default None
Fill existing missing (NaN) values, and any new element needed for successful DataFrame alignment, with this value before computation. If data in both corresponding DataFrame locations is missing the result will be missing.
Returns: DataFrame
Result of the arithmetic operation.
See also
DataFrame.add
- Add DataFrames.
DataFrame.sub
- Subtract DataFrames.
DataFrame.mul
- Multiply DataFrames.
DataFrame.div
- Divide DataFrames (float division).
DataFrame.truediv
- Divide DataFrames (float division).
DataFrame.floordiv
- Divide DataFrames (integer division).
DataFrame.mod
- Calculate modulo (remainder after division).
DataFrame.pow
- Calculate exponential power.
Notes
Mismatched indices will be unioned together.
Examples
>>> df = pd.DataFrame({'angles': [0, 3, 4], # doctest: +SKIP ... 'degrees': [360, 180, 360]}, ... index=['circle', 'triangle', 'rectangle']) >>> df # doctest: +SKIP angles degrees circle 0 360 triangle 3 180 rectangle 4 360
Add a scalar with the operator version, which returns the same results.
>>> df + 1 # doctest: +SKIP angles degrees circle 1 361 triangle 4 181 rectangle 5 361
>>> df.add(1) # doctest: +SKIP angles degrees circle 1 361 triangle 4 181 rectangle 5 361
Divide by constant with reverse version.
>>> df.div(10) # doctest: +SKIP angles degrees circle 0.0 36.0 triangle 0.3 18.0 rectangle 0.4 36.0
>>> df.rdiv(10) # doctest: +SKIP angles degrees circle inf 0.027778 triangle 3.333333 0.055556 rectangle 2.500000 0.027778
Subtract a list and Series by axis with operator version.
>>> df - [1, 2] # doctest: +SKIP angles degrees circle -1 358 triangle 2 178 rectangle 3 358
>>> df.sub([1, 2], axis='columns') # doctest: +SKIP angles degrees circle -1 358 triangle 2 178 rectangle 3 358
>>> df.sub(pd.Series([1, 1, 1], index=['circle', 'triangle', 'rectangle']), # doctest: +SKIP ... axis='index') angles degrees circle -1 359 triangle 2 179 rectangle 3 359
Multiply a DataFrame of different shape with operator version.
>>> other = pd.DataFrame({'angles': [0, 3, 4]}, # doctest: +SKIP ... index=['circle', 'triangle', 'rectangle']) >>> other # doctest: +SKIP angles circle 0 triangle 3 rectangle 4
>>> df * other # doctest: +SKIP angles degrees circle 0 NaN triangle 9 NaN rectangle 16 NaN
>>> df.mul(other, fill_value=0) # doctest: +SKIP angles degrees circle 0 0.0 triangle 9 0.0 rectangle 16 0.0
Divide by a MultiIndex by level.
>>> df_multindex = pd.DataFrame({'angles': [0, 3, 4, 4, 5, 6], # doctest: +SKIP ... 'degrees': [360, 180, 360, 360, 540, 720]}, ... index=[['A', 'A', 'A', 'B', 'B', 'B'], ... ['circle', 'triangle', 'rectangle', ... 'square', 'pentagon', 'hexagon']]) >>> df_multindex # doctest: +SKIP angles degrees A circle 0 360 triangle 3 180 rectangle 4 360 B square 4 360 pentagon 5 540 hexagon 6 720
>>> df.div(df_multindex, level=1, fill_value=0) # doctest: +SKIP angles degrees A circle NaN 1.0 triangle 1.0 1.0 rectangle 1.0 1.0 B square 0.0 0.0 pentagon 0.0 0.0 hexagon 0.0 0.0
-
prod
(axis=None, skipna=True, split_every=False, dtype=None, out=None, min_count=None)¶ Return the product of the values for the requested axis.
This docstring was copied from pandas.core.frame.DataFrame.prod.
Some inconsistencies with the Dask version may exist.
Parameters: axis : {index (0), columns (1)}
Axis for the function to be applied on.
skipna : bool, default True
Exclude NA/null values when computing the result.
level : int or level name, default None (Not supported in Dask)
If the axis is a MultiIndex (hierarchical), count along a particular level, collapsing into a Series.
numeric_only : bool, default None (Not supported in Dask)
Include only float, int, boolean columns. If None, will attempt to use everything, then use only numeric data. Not implemented for Series.
min_count : int, default 0
The required number of valid values to perform the operation. If fewer than min_count non-NA values are present the result will be NA.
New in version 0.22.0: Added with the default being 0. This means the sum of an all-NA or empty Series is 0, and the product of an all-NA or empty Series is 1.
**kwargs
Additional keyword arguments to be passed to the function.
Returns: Series or DataFrame (if level specified)
Examples
By default, the product of an empty or all-NA Series is 1
>>> pd.Series([]).prod()  # doctest: +SKIP
1.0
This can be controlled with the min_count parameter
>>> pd.Series([]).prod(min_count=1)  # doctest: +SKIP
nan
Thanks to the skipna parameter, min_count handles all-NA and empty series identically.
>>> pd.Series([np.nan]).prod()  # doctest: +SKIP
1.0
>>> pd.Series([np.nan]).prod(min_count=1)  # doctest: +SKIP
nan
-
quantile
(q=0.5, axis=0, method='default')¶ Approximate row-wise and precise column-wise quantiles of DataFrame
Parameters: q : list/array of floats, default 0.5 (50%)
Iterable of numbers ranging from 0 to 1 for the desired quantiles
axis : {0, 1, ‘index’, ‘columns’} (default 0)
0 or ‘index’ for row-wise, 1 or ‘columns’ for column-wise
method : {‘default’, ‘tdigest’, ‘dask’}, optional
What method to use. By default, dask’s internal custom algorithm ('dask') is used. If set to 'tdigest', tdigest will be used for floats and ints, falling back to 'dask' otherwise.
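A small sketch of column-wise quantiles on a synthetic frame (the data here is illustrative, not from the source):
>>> import pandas as pd
>>> import dask.dataframe as dd
>>> ddf = dd.from_pandas(pd.DataFrame({'x': range(100), 'y': range(100, 200)}),
...                      npartitions=4)
>>> ddf.quantile([0.25, 0.5, 0.75]).compute()   # one row per requested quantile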
-
query
(expr, **kwargs)¶ Filter dataframe with complex expression
Blocked version of pd.DataFrame.query
This is like the sequential version except that this will also happen in many threads. This may conflict with numexpr which will use multiple threads itself. We recommend that you set numexpr to use a single thread:
import numexpr
numexpr.set_num_threads(1)
See also
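A minimal sketch of the blocked query (the frame and expression are illustrative, not from the source):
>>> import pandas as pd
>>> import dask.dataframe as dd
>>> ddf = dd.from_pandas(pd.DataFrame({'x': range(10), 'y': range(10, 20)}),
...                      npartitions=2)
>>> ddf.query('x > 3 and y < 18').compute()   # rows where both conditions hold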
-
radd
(other, axis='columns', level=None, fill_value=None)¶ Get Addition of dataframe and other, element-wise (binary operator radd).
This docstring was copied from pandas.core.frame.DataFrame.radd.
Some inconsistencies with the Dask version may exist.
Equivalent to other + dataframe, but with support to substitute a fill_value for missing data in one of the inputs. With reverse version, add.
Among flexible wrappers (add, sub, mul, div, mod, pow) to arithmetic operators: +, -, *, /, //, %, **.
Parameters: other : scalar, sequence, Series, or DataFrame
Any single or multiple element data structure, or list-like object.
axis : {0 or ‘index’, 1 or ‘columns’}
Whether to compare by the index (0 or ‘index’) or columns (1 or ‘columns’). For Series input, axis to match Series index on.
level : int or label
Broadcast across a level, matching Index values on the passed MultiIndex level.
fill_value : float or None, default None
Fill existing missing (NaN) values, and any new element needed for successful DataFrame alignment, with this value before computation. If data in both corresponding DataFrame locations is missing the result will be missing.
Returns: DataFrame
Result of the arithmetic operation.
See also
DataFrame.add
- Add DataFrames.
DataFrame.sub
- Subtract DataFrames.
DataFrame.mul
- Multiply DataFrames.
DataFrame.div
- Divide DataFrames (float division).
DataFrame.truediv
- Divide DataFrames (float division).
DataFrame.floordiv
- Divide DataFrames (integer division).
DataFrame.mod
- Calculate modulo (remainder after division).
DataFrame.pow
- Calculate exponential power.
Notes
Mismatched indices will be unioned together.
Examples
>>> df = pd.DataFrame({'angles': [0, 3, 4], # doctest: +SKIP ... 'degrees': [360, 180, 360]}, ... index=['circle', 'triangle', 'rectangle']) >>> df # doctest: +SKIP angles degrees circle 0 360 triangle 3 180 rectangle 4 360
Add a scalar with the operator version, which returns the same results.
>>> df + 1 # doctest: +SKIP angles degrees circle 1 361 triangle 4 181 rectangle 5 361
>>> df.add(1) # doctest: +SKIP angles degrees circle 1 361 triangle 4 181 rectangle 5 361
Divide by constant with reverse version.
>>> df.div(10) # doctest: +SKIP angles degrees circle 0.0 36.0 triangle 0.3 18.0 rectangle 0.4 36.0
>>> df.rdiv(10) # doctest: +SKIP angles degrees circle inf 0.027778 triangle 3.333333 0.055556 rectangle 2.500000 0.027778
Subtract a list and Series by axis with operator version.
>>> df - [1, 2] # doctest: +SKIP angles degrees circle -1 358 triangle 2 178 rectangle 3 358
>>> df.sub([1, 2], axis='columns') # doctest: +SKIP angles degrees circle -1 358 triangle 2 178 rectangle 3 358
>>> df.sub(pd.Series([1, 1, 1], index=['circle', 'triangle', 'rectangle']), # doctest: +SKIP ... axis='index') angles degrees circle -1 359 triangle 2 179 rectangle 3 359
Multiply a DataFrame of different shape with operator version.
>>> other = pd.DataFrame({'angles': [0, 3, 4]}, # doctest: +SKIP ... index=['circle', 'triangle', 'rectangle']) >>> other # doctest: +SKIP angles circle 0 triangle 3 rectangle 4
>>> df * other # doctest: +SKIP angles degrees circle 0 NaN triangle 9 NaN rectangle 16 NaN
>>> df.mul(other, fill_value=0) # doctest: +SKIP angles degrees circle 0 0.0 triangle 9 0.0 rectangle 16 0.0
Divide by a MultiIndex by level.
>>> df_multindex = pd.DataFrame({'angles': [0, 3, 4, 4, 5, 6], # doctest: +SKIP ... 'degrees': [360, 180, 360, 360, 540, 720]}, ... index=[['A', 'A', 'A', 'B', 'B', 'B'], ... ['circle', 'triangle', 'rectangle', ... 'square', 'pentagon', 'hexagon']]) >>> df_multindex # doctest: +SKIP angles degrees A circle 0 360 triangle 3 180 rectangle 4 360 B square 4 360 pentagon 5 540 hexagon 6 720
>>> df.div(df_multindex, level=1, fill_value=0) # doctest: +SKIP angles degrees A circle NaN 1.0 triangle 1.0 1.0 rectangle 1.0 1.0 B square 0.0 0.0 pentagon 0.0 0.0 hexagon 0.0 0.0
-
random_split
(frac, random_state=None)¶ Pseudorandomly split dataframe into different pieces row-wise
Parameters: frac : list
List of floats that should sum to one.
random_state: int or np.random.RandomState
If an int, create a new RandomState with this as the seed; otherwise draw from the passed RandomState.
See also
dask.DataFrame.sample
Examples
50/50 split
>>> a, b = df.random_split([0.5, 0.5]) # doctest: +SKIP
80/10/10 split, consistent random_state
>>> a, b, c = df.random_split([0.8, 0.1, 0.1], random_state=123) # doctest: +SKIP
-
rdiv
(other, axis='columns', level=None, fill_value=None)¶ Get Floating division of dataframe and other, element-wise (binary operator rtruediv).
This docstring was copied from pandas.core.frame.DataFrame.rdiv.
Some inconsistencies with the Dask version may exist.
Equivalent to other / dataframe, but with support to substitute a fill_value for missing data in one of the inputs. With reverse version, truediv.
Among flexible wrappers (add, sub, mul, div, mod, pow) to arithmetic operators: +, -, *, /, //, %, **.
Parameters: other : scalar, sequence, Series, or DataFrame
Any single or multiple element data structure, or list-like object.
axis : {0 or ‘index’, 1 or ‘columns’}
Whether to compare by the index (0 or ‘index’) or columns (1 or ‘columns’). For Series input, axis to match Series index on.
level : int or label
Broadcast across a level, matching Index values on the passed MultiIndex level.
fill_value : float or None, default None
Fill existing missing (NaN) values, and any new element needed for successful DataFrame alignment, with this value before computation. If data in both corresponding DataFrame locations is missing the result will be missing.
Returns: DataFrame
Result of the arithmetic operation.
See also
DataFrame.add
- Add DataFrames.
DataFrame.sub
- Subtract DataFrames.
DataFrame.mul
- Multiply DataFrames.
DataFrame.div
- Divide DataFrames (float division).
DataFrame.truediv
- Divide DataFrames (float division).
DataFrame.floordiv
- Divide DataFrames (integer division).
DataFrame.mod
- Calculate modulo (remainder after division).
DataFrame.pow
- Calculate exponential power.
Notes
Mismatched indices will be unioned together.
Examples
>>> df = pd.DataFrame({'angles': [0, 3, 4], # doctest: +SKIP ... 'degrees': [360, 180, 360]}, ... index=['circle', 'triangle', 'rectangle']) >>> df # doctest: +SKIP angles degrees circle 0 360 triangle 3 180 rectangle 4 360
Add a scalar with the operator version, which returns the same results.
>>> df + 1 # doctest: +SKIP angles degrees circle 1 361 triangle 4 181 rectangle 5 361
>>> df.add(1) # doctest: +SKIP angles degrees circle 1 361 triangle 4 181 rectangle 5 361
Divide by constant with reverse version.
>>> df.div(10) # doctest: +SKIP angles degrees circle 0.0 36.0 triangle 0.3 18.0 rectangle 0.4 36.0
>>> df.rdiv(10) # doctest: +SKIP angles degrees circle inf 0.027778 triangle 3.333333 0.055556 rectangle 2.500000 0.027778
Subtract a list and Series by axis with operator version.
>>> df - [1, 2] # doctest: +SKIP angles degrees circle -1 358 triangle 2 178 rectangle 3 358
>>> df.sub([1, 2], axis='columns') # doctest: +SKIP angles degrees circle -1 358 triangle 2 178 rectangle 3 358
>>> df.sub(pd.Series([1, 1, 1], index=['circle', 'triangle', 'rectangle']), # doctest: +SKIP ... axis='index') angles degrees circle -1 359 triangle 2 179 rectangle 3 359
Multiply a DataFrame of different shape with operator version.
>>> other = pd.DataFrame({'angles': [0, 3, 4]}, # doctest: +SKIP ... index=['circle', 'triangle', 'rectangle']) >>> other # doctest: +SKIP angles circle 0 triangle 3 rectangle 4
>>> df * other # doctest: +SKIP angles degrees circle 0 NaN triangle 9 NaN rectangle 16 NaN
>>> df.mul(other, fill_value=0) # doctest: +SKIP angles degrees circle 0 0.0 triangle 9 0.0 rectangle 16 0.0
Divide by a MultiIndex by level.
>>> df_multindex = pd.DataFrame({'angles': [0, 3, 4, 4, 5, 6], # doctest: +SKIP ... 'degrees': [360, 180, 360, 360, 540, 720]}, ... index=[['A', 'A', 'A', 'B', 'B', 'B'], ... ['circle', 'triangle', 'rectangle', ... 'square', 'pentagon', 'hexagon']]) >>> df_multindex # doctest: +SKIP angles degrees A circle 0 360 triangle 3 180 rectangle 4 360 B square 4 360 pentagon 5 540 hexagon 6 720
>>> df.div(df_multindex, level=1, fill_value=0) # doctest: +SKIP angles degrees A circle NaN 1.0 triangle 1.0 1.0 rectangle 1.0 1.0 B square 0.0 0.0 pentagon 0.0 0.0 hexagon 0.0 0.0
-
reduction
(chunk, aggregate=None, combine=None, meta='__no_default__', token=None, split_every=None, chunk_kwargs=None, aggregate_kwargs=None, combine_kwargs=None, **kwargs)¶ Generic row-wise reductions.
Parameters: chunk : callable
Function to operate on each partition. Should return a pandas.DataFrame, pandas.Series, or a scalar.
aggregate : callable, optional
Function to operate on the concatenated result of chunk. If not specified, defaults to chunk. Used to do the final aggregation in a tree reduction.
The input to aggregate depends on the output of chunk. If the output of chunk is a:
- scalar: Input is a Series, with one row per partition.
- Series: Input is a DataFrame, with one row per partition. Columns are the rows in the output series.
- DataFrame: Input is a DataFrame, with one row per partition. Columns are the columns in the output dataframes.
Should return a pandas.DataFrame, pandas.Series, or a scalar.
combine : callable, optional
Function to operate on intermediate concatenated results of chunk in a tree-reduction. If not provided, defaults to aggregate. The input/output requirements should match that of aggregate described above.
meta : pd.DataFrame, pd.Series, dict, iterable, tuple, optional
An empty pd.DataFrame or pd.Series that matches the dtypes and column names of the output. This metadata is necessary for many algorithms in dask dataframe to work. For ease of use, some alternative inputs are also available. Instead of a DataFrame, a dict of {name: dtype} or iterable of (name, dtype) can be provided (note that the order of the names should match the order of the columns). Instead of a series, a tuple of (name, dtype) can be used. If not provided, dask will try to infer the metadata. This may lead to unexpected results, so providing meta is recommended. For more information, see dask.dataframe.utils.make_meta.
token : str, optional
The name to use for the output keys.
split_every : int, optional
Group partitions into groups of this size while performing a tree-reduction. If set to False, no tree-reduction will be used, and all intermediates will be concatenated and passed to aggregate. Default is 8.
chunk_kwargs : dict, optional
Keyword arguments to pass on to chunk only.
aggregate_kwargs : dict, optional
Keyword arguments to pass on to aggregate only.
combine_kwargs : dict, optional
Keyword arguments to pass on to combine only.
kwargs :
All remaining keywords will be passed to chunk, combine, and aggregate.
Examples
>>> import pandas as pd
>>> import dask.dataframe as dd
>>> df = pd.DataFrame({'x': range(50), 'y': range(50, 100)})
>>> ddf = dd.from_pandas(df, npartitions=4)
Count the number of rows in a DataFrame. To do this, count the number of rows in each partition, then sum the results:
>>> res = ddf.reduction(lambda x: x.count(), ... aggregate=lambda x: x.sum()) >>> res.compute() x 50 y 50 dtype: int64
Count the number of rows in a Series with elements greater than or equal to a value (provided via a keyword).
>>> def count_greater(x, value=0): ... return (x >= value).sum() >>> res = ddf.x.reduction(count_greater, aggregate=lambda x: x.sum(), ... chunk_kwargs={'value': 25}) >>> res.compute() 25
Aggregate both the sum and count of a Series at the same time:
>>> def sum_and_count(x): ... return pd.Series({'count': x.count(), 'sum': x.sum()}, ... index=['count', 'sum']) >>> res = ddf.x.reduction(sum_and_count, aggregate=lambda x: x.sum()) >>> res.compute() count 50 sum 1225 dtype: int64
Doing the same, but for a DataFrame. Here chunk returns a DataFrame, meaning the input to aggregate is a DataFrame with an index with non-unique entries for both ‘x’ and ‘y’. We groupby the index, and sum each group to get the final result.
>>> def sum_and_count(x):
...     return pd.DataFrame({'count': x.count(), 'sum': x.sum()},
...                         columns=['count', 'sum'])
>>> res = ddf.reduction(sum_and_count,
...                     aggregate=lambda x: x.groupby(level=0).sum())
>>> res.compute()
   count   sum
x     50  1225
y     50  3725
-
rename
(index=None, columns=None)¶ Alter axes labels.
This docstring was copied from pandas.core.frame.DataFrame.rename.
Some inconsistencies with the Dask version may exist.
Function / dict values must be unique (1-to-1). Labels not contained in a dict / Series will be left as-is. Extra labels listed don’t throw an error.
See the user guide for more.
Parameters: mapper : dict-like or function (Not supported in Dask)
Dict-like or functions transformations to apply to that axis’ values. Use either mapper and axis to specify the axis to target with mapper, or index and columns.
index : dict-like or function (Not supported in Dask)
Alternative to specifying axis (mapper, axis=0 is equivalent to index=mapper).
columns : dict-like or function
Alternative to specifying axis (mapper, axis=1 is equivalent to columns=mapper).
axis : int or str (Not supported in Dask)
Axis to target with mapper. Can be either the axis name (‘index’, ‘columns’) or number (0, 1). The default is ‘index’.
copy : bool, default True (Not supported in Dask)
Also copy underlying data.
inplace : bool, default False (Not supported in Dask)
Whether to return a new DataFrame. If True then value of copy is ignored.
level : int or level name, default None (Not supported in Dask)
In case of a MultiIndex, only rename labels in the specified level.
errors : {‘ignore’, ‘raise’}, default ‘ignore’ (Not supported in Dask)
If ‘raise’, raise a KeyError when a dict-like mapper, index, or columns contains labels that are not present in the Index being transformed. If ‘ignore’, existing keys will be renamed and extra keys will be ignored.
Returns: DataFrame
DataFrame with the renamed axis labels.
Raises: KeyError
If any of the labels is not found in the selected axis and “errors=’raise’”.
See also
DataFrame.rename_axis
- Set the name of the axis.
Examples
DataFrame.rename supports two calling conventions:
- (index=index_mapper, columns=columns_mapper, ...)
- (mapper, axis={'index', 'columns'}, ...)
We highly recommend using keyword arguments to clarify your intent.
Rename columns using a mapping:
>>> df = pd.DataFrame({"A": [1, 2, 3], "B": [4, 5, 6]}) # doctest: +SKIP >>> df.rename(columns={"A": "a", "B": "c"}) # doctest: +SKIP a c 0 1 4 1 2 5 2 3 6
Rename index using a mapping:
>>> df.rename(index={0: "x", 1: "y", 2: "z"}) # doctest: +SKIP A B x 1 4 y 2 5 z 3 6
Cast index labels to a different type:
>>> df.index # doctest: +SKIP RangeIndex(start=0, stop=3, step=1) >>> df.rename(index=str).index # doctest: +SKIP Index(['0', '1', '2'], dtype='object')
>>> df.rename(columns={"A": "a", "B": "b", "C": "c"}, errors="raise") # doctest: +SKIP Traceback (most recent call last): KeyError: ['C'] not found in axis
Using axis-style parameters
>>> df.rename(str.lower, axis='columns') # doctest: +SKIP a b 0 1 4 1 2 5 2 3 6
>>> df.rename({1: 2, 2: 4}, axis='index') # doctest: +SKIP A B 0 1 4 2 2 5 4 3 6
-
repartition
(divisions=None, npartitions=None, partition_size=None, freq=None, force=False)¶ Repartition dataframe along new divisions
Parameters: divisions : list, optional
List of partitions to be used. Only used if npartitions and partition_size aren’t specified.
npartitions : int, optional
Number of partitions of output. Only used if partition_size isn’t specified.
partition_size: int or string, optional
Max number of bytes of memory for each partition. Use numbers or strings like 5MB. If specified npartitions and divisions will be ignored.
Warning
This keyword argument triggers computation to determine the memory size of each partition, which may be expensive.
freq : str, pd.Timedelta
A period on which to partition timeseries data like '7D' or '12h' or pd.Timedelta(hours=12). Assumes a datetime index.
force : bool, default False
Allows the expansion of the existing divisions. If False then the new divisions lower and upper bounds must be the same as the old divisions.
Notes
Exactly one of divisions, npartitions, partition_size, or freq should be specified. A ValueError will be raised when that is not the case.
Examples
>>> df = df.repartition(npartitions=10)  # doctest: +SKIP
>>> df = df.repartition(divisions=[0, 5, 10, 20])  # doctest: +SKIP
>>> df = df.repartition(freq='7d')  # doctest: +SKIP
-
replace
(to_replace=None, value=None, regex=False)¶ Replace values given in to_replace with value.
This docstring was copied from pandas.core.frame.DataFrame.replace.
Some inconsistencies with the Dask version may exist.
Values of the DataFrame are replaced with other values dynamically. This differs from updating with .loc or .iloc, which require you to specify a location to update with some value.
Parameters: to_replace : str, regex, list, dict, Series, int, float, or None
How to find the values that will be replaced.
numeric, str or regex:
- numeric: numeric values equal to to_replace will be replaced with value
- str: string exactly matching to_replace will be replaced with value
- regex: regexs matching to_replace will be replaced with value
list of str, regex, or numeric:
- First, if to_replace and value are both lists, they must be the same length.
- Second, if regex=True then all of the strings in both lists will be interpreted as regexs otherwise they will match directly. This doesn’t matter much for value since there are only a few possible substitution regexes you can use.
- str, regex and numeric rules apply as above.
dict:
- Dicts can be used to specify different replacement values for different existing values. For example, {'a': 'b', 'y': 'z'} replaces the value ‘a’ with ‘b’ and ‘y’ with ‘z’. To use a dict in this way the value parameter should be None.
- For a DataFrame a dict can specify that different values should be replaced in different columns. For example, {'a': 1, 'b': 'z'} looks for the value 1 in column ‘a’ and the value ‘z’ in column ‘b’ and replaces these values with whatever is specified in value. The value parameter should not be None in this case. You can treat this as a special case of passing two lists except that you are specifying the column to search in.
- For a DataFrame nested dictionaries, e.g., {'a': {'b': np.nan}}, are read as follows: look in column ‘a’ for the value ‘b’ and replace it with NaN. The value parameter should be None to use a nested dict in this way. You can nest regular expressions as well. Note that column names (the top-level dictionary keys in a nested dictionary) cannot be regular expressions.
None:
- This means that the regex argument must be a string, compiled regular expression, or list, dict, ndarray or Series of such elements. If value is also None then this must be a nested dictionary or Series.
See the examples section for examples of each of these.
value : scalar, dict, list, str, regex, default None
Value to replace any values matching to_replace with. For a DataFrame a dict of values can be used to specify which value to use for each column (columns not in the dict will not be filled). Regular expressions, strings and lists or dicts of such objects are also allowed.
inplace : bool, default False (Not supported in Dask)
If True, in place. Note: this will modify any other views on this object (e.g. a column from a DataFrame). Returns the caller if this is True.
limit : int, default None (Not supported in Dask)
Maximum size gap to forward or backward fill.
regex : bool or same types as to_replace, default False
Whether to interpret to_replace and/or value as regular expressions. If this is True then to_replace must be a string. Alternatively, this could be a regular expression or a list, dict, or array of regular expressions in which case to_replace must be None.
method : {‘pad’, ‘ffill’, ‘bfill’, None} (Not supported in Dask)
The method to use for replacement, when to_replace is a scalar, list or tuple and value is None.
Changed in version 0.23.0: Added to DataFrame.
Returns: DataFrame
Object after replacement.
Raises: AssertionError
- If regex is not a bool and to_replace is not None.
TypeError
- If to_replace is a dict and value is not a list, dict, ndarray, or Series
- If to_replace is None and regex is not compilable into a regular expression or is a list, dict, ndarray, or Series.
- When replacing multiple bool or datetime64 objects and the arguments to to_replace does not match the type of the value being replaced
ValueError
- If a list or an ndarray is passed to to_replace and value but they are not the same length.
See also
DataFrame.fillna
- Fill NA values.
DataFrame.where
- Replace values based on boolean condition.
Series.str.replace
- Simple string replacement.
Notes
- Regex substitution is performed under the hood with re.sub. The rules for substitution for re.sub are the same.
are the same. - Regular expressions will only substitute on strings, meaning you cannot provide, for example, a regular expression matching floating point numbers and expect the columns in your frame that have a numeric dtype to be matched. However, if those floating point numbers are strings, then you can do this.
- This method has a lot of options. You are encouraged to experiment and play with this method to gain intuition about how it works.
- When dict is used as the to_replace value, it is like key(s) in the dict are the to_replace part and value(s) in the dict are the value parameter.
Examples
Scalar `to_replace` and `value`
>>> s = pd.Series([0, 1, 2, 3, 4]) # doctest: +SKIP >>> s.replace(0, 5) # doctest: +SKIP 0 5 1 1 2 2 3 3 4 4 dtype: int64
>>> df = pd.DataFrame({'A': [0, 1, 2, 3, 4], # doctest: +SKIP ... 'B': [5, 6, 7, 8, 9], ... 'C': ['a', 'b', 'c', 'd', 'e']}) >>> df.replace(0, 5) # doctest: +SKIP A B C 0 5 5 a 1 1 6 b 2 2 7 c 3 3 8 d 4 4 9 e
List-like `to_replace`
>>> df.replace([0, 1, 2, 3], 4) # doctest: +SKIP A B C 0 4 5 a 1 4 6 b 2 4 7 c 3 4 8 d 4 4 9 e
>>> df.replace([0, 1, 2, 3], [4, 3, 2, 1]) # doctest: +SKIP A B C 0 4 5 a 1 3 6 b 2 2 7 c 3 1 8 d 4 4 9 e
>>> s.replace([1, 2], method='bfill') # doctest: +SKIP 0 0 1 3 2 3 3 3 4 4 dtype: int64
dict-like `to_replace`
>>> df.replace({0: 10, 1: 100}) # doctest: +SKIP A B C 0 10 5 a 1 100 6 b 2 2 7 c 3 3 8 d 4 4 9 e
>>> df.replace({'A': 0, 'B': 5}, 100) # doctest: +SKIP A B C 0 100 100 a 1 1 6 b 2 2 7 c 3 3 8 d 4 4 9 e
>>> df.replace({'A': {0: 100, 4: 400}}) # doctest: +SKIP A B C 0 100 5 a 1 1 6 b 2 2 7 c 3 3 8 d 4 400 9 e
Regular expression `to_replace`
>>> df = pd.DataFrame({'A': ['bat', 'foo', 'bait'], # doctest: +SKIP ... 'B': ['abc', 'bar', 'xyz']}) >>> df.replace(to_replace=r'^ba.$', value='new', regex=True) # doctest: +SKIP A B 0 new abc 1 foo new 2 bait xyz
>>> df.replace({'A': r'^ba.$'}, {'A': 'new'}, regex=True) # doctest: +SKIP A B 0 new abc 1 foo bar 2 bait xyz
>>> df.replace(regex=r'^ba.$', value='new') # doctest: +SKIP A B 0 new abc 1 foo new 2 bait xyz
>>> df.replace(regex={r'^ba.$': 'new', 'foo': 'xyz'}) # doctest: +SKIP A B 0 new abc 1 xyz new 2 bait xyz
>>> df.replace(regex=[r'^ba.$', 'foo'], value='new') # doctest: +SKIP A B 0 new abc 1 new new 2 bait xyz
Note that when replacing multiple bool or datetime64 objects, the data types in the to_replace parameter must match the data type of the value being replaced:
>>> df = pd.DataFrame({'A': [True, False, True],  # doctest: +SKIP
...                    'B': [False, True, False]})
>>> df.replace({'a string': 'new value', True: False})  # raises  # doctest: +SKIP
Traceback (most recent call last):
    ...
TypeError: Cannot compare types 'ndarray(dtype=bool)' and 'str'
This raises a TypeError because one of the dict keys is not of the correct type for replacement.
Compare the behavior of s.replace({'a': None}) and s.replace('a', None) to understand the peculiarities of the to_replace parameter:
>>> s = pd.Series([10, 'a', 'a', 'b', 'a'])  # doctest: +SKIP
When one uses a dict as the to_replace value, it is like the value(s) in the dict are equal to the value parameter. s.replace({'a': None}) is equivalent to s.replace(to_replace={'a': None}, value=None, method=None):
>>> s.replace({'a': None})  # doctest: +SKIP
0      10
1    None
2    None
3       b
4    None
dtype: object
When value=None and to_replace is a scalar, list or tuple, replace uses the method parameter (default ‘pad’) to do the replacement. So this is why the ‘a’ values are being replaced by 10 in rows 1 and 2 and ‘b’ in row 4 in this case. The command s.replace('a', None) is actually equivalent to s.replace(to_replace='a', value=None, method='pad'):
>>> s.replace('a', None)  # doctest: +SKIP
0    10
1    10
2    10
3     b
4     b
dtype: object
-
resample
(rule, closed=None, label=None)¶ Resample time-series data.
This docstring was copied from pandas.core.frame.DataFrame.resample.
Some inconsistencies with the Dask version may exist.
Convenience method for frequency conversion and resampling of time series. Object must have a datetime-like index (DatetimeIndex, PeriodIndex, or TimedeltaIndex), or pass datetime-like values to the on or level keyword.
Parameters: rule : DateOffset, Timedelta or str
The offset string or object representing target conversion.
how : str (Not supported in Dask)
Method for down/re-sampling, default to ‘mean’ for downsampling.
Deprecated since version 0.18.0: The new syntax is .resample(...).mean(), or .resample(...).apply(<func>)
axis : {0 or ‘index’, 1 or ‘columns’}, default 0 (Not supported in Dask)
Which axis to use for up- or down-sampling. For Series this will default to 0, i.e. along the rows. Must be DatetimeIndex, TimedeltaIndex or PeriodIndex.
fill_method : str, default None (Not supported in Dask)
Filling method for upsampling.
Deprecated since version 0.18.0: The new syntax is .resample(...).<func>(), e.g. .resample(...).pad()
closed : {‘right’, ‘left’}, default None
Which side of bin interval is closed. The default is ‘left’ for all frequency offsets except for ‘M’, ‘A’, ‘Q’, ‘BM’, ‘BA’, ‘BQ’, and ‘W’ which all have a default of ‘right’.
label : {‘right’, ‘left’}, default None
Which bin edge label to label bucket with. The default is ‘left’ for all frequency offsets except for ‘M’, ‘A’, ‘Q’, ‘BM’, ‘BA’, ‘BQ’, and ‘W’ which all have a default of ‘right’.
convention : {‘start’, ‘end’, ‘s’, ‘e’}, default ‘start’ (Not supported in Dask)
For PeriodIndex only, controls whether to use the start or end of rule.
kind : {‘timestamp’, ‘period’}, optional, default None (Not supported in Dask)
Pass ‘timestamp’ to convert the resulting index to a DateTimeIndex or ‘period’ to convert it to a PeriodIndex. By default the input representation is retained.
loffset : timedelta, default None (Not supported in Dask)
Adjust the resampled time labels.
limit : int, default None (Not supported in Dask)
Maximum size gap when reindexing with fill_method.
Deprecated since version 0.18.0.
base : int, default 0 (Not supported in Dask)
For frequencies that evenly subdivide 1 day, the “origin” of the aggregated intervals. For example, for ‘5min’ frequency, base could range from 0 through 4. Defaults to 0.
on : str, optional (Not supported in Dask)
For a DataFrame, column to use instead of index for resampling. Column must be datetime-like.
New in version 0.19.0.
level : str or int, optional (Not supported in Dask)
For a MultiIndex, level (name or number) to use for resampling. level must be datetime-like.
New in version 0.19.0.
Returns: Resampler object
See also
groupby
- Group by mapping, function, label, or list of labels.
Series.resample
- Resample a Series.
DataFrame.resample
- Resample a DataFrame.
Notes
See the user guide for more.
To learn more about the offset strings, please see this link.
Examples
Start by creating a series with 9 one minute timestamps.
>>> index = pd.date_range('1/1/2000', periods=9, freq='T') # doctest: +SKIP >>> series = pd.Series(range(9), index=index) # doctest: +SKIP >>> series # doctest: +SKIP 2000-01-01 00:00:00 0 2000-01-01 00:01:00 1 2000-01-01 00:02:00 2 2000-01-01 00:03:00 3 2000-01-01 00:04:00 4 2000-01-01 00:05:00 5 2000-01-01 00:06:00 6 2000-01-01 00:07:00 7 2000-01-01 00:08:00 8 Freq: T, dtype: int64
Downsample the series into 3 minute bins and sum the values of the timestamps falling into a bin.
>>> series.resample('3T').sum() # doctest: +SKIP 2000-01-01 00:00:00 3 2000-01-01 00:03:00 12 2000-01-01 00:06:00 21 Freq: 3T, dtype: int64
Downsample the series into 3 minute bins as above, but label each bin using the right edge instead of the left. Please note that the value in the bucket used as the label is not included in the bucket, which it labels. For example, in the original series the bucket 2000-01-01 00:03:00 contains the value 3, but the summed value in the resampled bucket with the label 2000-01-01 00:03:00 does not include 3 (if it did, the summed value would be 6, not 3). To include this value close the right side of the bin interval as illustrated in the example below this one.
>>> series.resample('3T', label='right').sum()  # doctest: +SKIP
2000-01-01 00:03:00     3
2000-01-01 00:06:00    12
2000-01-01 00:09:00    21
Freq: 3T, dtype: int64
Downsample the series into 3 minute bins as above, but close the right side of the bin interval.
>>> series.resample('3T', label='right', closed='right').sum() # doctest: +SKIP 2000-01-01 00:00:00 0 2000-01-01 00:03:00 6 2000-01-01 00:06:00 15 2000-01-01 00:09:00 15 Freq: 3T, dtype: int64
Upsample the series into 30 second bins.
>>> series.resample('30S').asfreq()[0:5] # Select first 5 rows # doctest: +SKIP 2000-01-01 00:00:00 0.0 2000-01-01 00:00:30 NaN 2000-01-01 00:01:00 1.0 2000-01-01 00:01:30 NaN 2000-01-01 00:02:00 2.0 Freq: 30S, dtype: float64
Upsample the series into 30 second bins and fill the NaN values using the pad method.
>>> series.resample('30S').pad()[0:5] # doctest: +SKIP 2000-01-01 00:00:00 0 2000-01-01 00:00:30 0 2000-01-01 00:01:00 1 2000-01-01 00:01:30 1 2000-01-01 00:02:00 2 Freq: 30S, dtype: int64
Upsample the series into 30 second bins and fill the NaN values using the bfill method.
>>> series.resample('30S').bfill()[0:5] # doctest: +SKIP 2000-01-01 00:00:00 0 2000-01-01 00:00:30 1 2000-01-01 00:01:00 1 2000-01-01 00:01:30 2 2000-01-01 00:02:00 2 Freq: 30S, dtype: int64
Pass a custom function via apply.
>>> def custom_resampler(array_like): # doctest: +SKIP ... return np.sum(array_like) + 5 ... >>> series.resample('3T').apply(custom_resampler) # doctest: +SKIP 2000-01-01 00:00:00 8 2000-01-01 00:03:00 17 2000-01-01 00:06:00 26 Freq: 3T, dtype: int64
For a Series with a PeriodIndex, the keyword convention can be used to control whether to use the start or end of rule.
Resample a year by quarter using ‘start’ convention. Values are assigned to the first quarter of the period.
>>> s = pd.Series([1, 2], index=pd.period_range('2012-01-01', # doctest: +SKIP ... freq='A', ... periods=2)) >>> s # doctest: +SKIP 2012 1 2013 2 Freq: A-DEC, dtype: int64 >>> s.resample('Q', convention='start').asfreq() # doctest: +SKIP 2012Q1 1.0 2012Q2 NaN 2012Q3 NaN 2012Q4 NaN 2013Q1 2.0 2013Q2 NaN 2013Q3 NaN 2013Q4 NaN Freq: Q-DEC, dtype: float64
Resample quarters by month using ‘end’ convention. Values are assigned to the last month of the period.
>>> q = pd.Series([1, 2, 3, 4], index=pd.period_range('2018-01-01', # doctest: +SKIP ... freq='Q', ... periods=4)) >>> q # doctest: +SKIP 2018Q1 1 2018Q2 2 2018Q3 3 2018Q4 4 Freq: Q-DEC, dtype: int64 >>> q.resample('M', convention='end').asfreq() # doctest: +SKIP 2018-03 1.0 2018-04 NaN 2018-05 NaN 2018-06 2.0 2018-07 NaN 2018-08 NaN 2018-09 3.0 2018-10 NaN 2018-11 NaN 2018-12 4.0 Freq: M, dtype: float64
For DataFrame objects, the keyword on can be used to specify the column instead of the index for resampling.
>>> d = dict({'price': [10, 11, 9, 13, 14, 18, 17, 19], # doctest: +SKIP ... 'volume': [50, 60, 40, 100, 50, 100, 40, 50]}) >>> df = pd.DataFrame(d) # doctest: +SKIP >>> df['week_starting'] = pd.date_range('01/01/2018', # doctest: +SKIP ... periods=8, ... freq='W') >>> df # doctest: +SKIP price volume week_starting 0 10 50 2018-01-07 1 11 60 2018-01-14 2 9 40 2018-01-21 3 13 100 2018-01-28 4 14 50 2018-02-04 5 18 100 2018-02-11 6 17 40 2018-02-18 7 19 50 2018-02-25 >>> df.resample('M', on='week_starting').mean() # doctest: +SKIP price volume week_starting 2018-01-31 10.75 62.5 2018-02-28 17.00 60.0
For a DataFrame with MultiIndex, the keyword level can be used to specify on which level the resampling needs to take place.
>>> days = pd.date_range('1/1/2000', periods=4, freq='D') # doctest: +SKIP >>> d2 = dict({'price': [10, 11, 9, 13, 14, 18, 17, 19], # doctest: +SKIP ... 'volume': [50, 60, 40, 100, 50, 100, 40, 50]}) >>> df2 = pd.DataFrame(d2, # doctest: +SKIP ... index=pd.MultiIndex.from_product([days, ... ['morning', ... 'afternoon']] ... )) >>> df2 # doctest: +SKIP price volume 2000-01-01 morning 10 50 afternoon 11 60 2000-01-02 morning 9 40 afternoon 13 100 2000-01-03 morning 14 50 afternoon 18 100 2000-01-04 morning 17 40 afternoon 19 50 >>> df2.resample('D', level=0).sum() # doctest: +SKIP price volume 2000-01-01 21 110 2000-01-02 22 140 2000-01-03 32 150 2000-01-04 36 90
- reset_index(drop=False)¶ Reset the index to the default index.
Note that unlike in pandas, the reset dask.dataframe index will not be monotonically increasing from 0. Instead, it will restart at 0 for each partition (e.g. index1 = [0, ..., 10], index2 = [0, ...]). This is due to the inability to statically know the full length of the index.
For DataFrame with multi-level index, returns a new DataFrame with labeling information in the columns under the index names, defaulting to ‘level_0’, ‘level_1’, etc. if any are None. For a standard index, the index name will be used (if set), otherwise a default ‘index’ or ‘level_0’ (if ‘index’ is already taken) will be used.
Parameters: drop : boolean, default False
Do not try to insert index into dataframe columns.
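Examples
A rough sketch of the per-partition numbering described above; the column name x, the string index, and the two-partition split are illustrative only:
>>> import pandas as pd  # doctest: +SKIP
>>> import dask.dataframe as dd  # doctest: +SKIP
>>> pdf = pd.DataFrame({'x': range(6)}, index=list('abcdef'))  # doctest: +SKIP
>>> ddf = dd.from_pandas(pdf, npartitions=2)  # doctest: +SKIP
>>> ddf.reset_index().compute().index.tolist()  # each partition restarts at 0  # doctest: +SKIP
[0, 1, 2, 0, 1, 2]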
- rfloordiv(other, axis='columns', level=None, fill_value=None)¶ Get Integer division of dataframe and other, element-wise (binary operator rfloordiv).
This docstring was copied from pandas.core.frame.DataFrame.rfloordiv.
Some inconsistencies with the Dask version may exist.
Equivalent to other // dataframe, but with support to substitute a fill_value for missing data in one of the inputs. With reverse version, floordiv.
Among flexible wrappers (add, sub, mul, div, mod, pow) to arithmetic operators: +, -, *, /, //, %, **.
Parameters: other : scalar, sequence, Series, or DataFrame
Any single or multiple element data structure, or list-like object.
axis : {0 or ‘index’, 1 or ‘columns’}
Whether to compare by the index (0 or ‘index’) or columns (1 or ‘columns’). For Series input, axis to match Series index on.
level : int or label
Broadcast across a level, matching Index values on the passed MultiIndex level.
fill_value : float or None, default None
Fill existing missing (NaN) values, and any new element needed for successful DataFrame alignment, with this value before computation. If data in both corresponding DataFrame locations is missing the result will be missing.
Returns: DataFrame
Result of the arithmetic operation.
See also
DataFrame.add
- Add DataFrames.
DataFrame.sub
- Subtract DataFrames.
DataFrame.mul
- Multiply DataFrames.
DataFrame.div
- Divide DataFrames (float division).
DataFrame.truediv
- Divide DataFrames (float division).
DataFrame.floordiv
- Divide DataFrames (integer division).
DataFrame.mod
- Calculate modulo (remainder after division).
DataFrame.pow
- Calculate exponential power.
Notes
Mismatched indices will be unioned together.
Examples
>>> df = pd.DataFrame({'angles': [0, 3, 4], # doctest: +SKIP ... 'degrees': [360, 180, 360]}, ... index=['circle', 'triangle', 'rectangle']) >>> df # doctest: +SKIP angles degrees circle 0 360 triangle 3 180 rectangle 4 360
Add a scalar with the operator version, which returns the same results.
>>> df + 1 # doctest: +SKIP angles degrees circle 1 361 triangle 4 181 rectangle 5 361
>>> df.add(1) # doctest: +SKIP angles degrees circle 1 361 triangle 4 181 rectangle 5 361
Divide by constant with reverse version.
>>> df.div(10) # doctest: +SKIP angles degrees circle 0.0 36.0 triangle 0.3 18.0 rectangle 0.4 36.0
>>> df.rdiv(10) # doctest: +SKIP angles degrees circle inf 0.027778 triangle 3.333333 0.055556 rectangle 2.500000 0.027778
Subtract a list and Series by axis with operator version.
>>> df - [1, 2] # doctest: +SKIP angles degrees circle -1 358 triangle 2 178 rectangle 3 358
>>> df.sub([1, 2], axis='columns') # doctest: +SKIP angles degrees circle -1 358 triangle 2 178 rectangle 3 358
>>> df.sub(pd.Series([1, 1, 1], index=['circle', 'triangle', 'rectangle']), # doctest: +SKIP ... axis='index') angles degrees circle -1 359 triangle 2 179 rectangle 3 359
Multiply a DataFrame of different shape with operator version.
>>> other = pd.DataFrame({'angles': [0, 3, 4]}, # doctest: +SKIP ... index=['circle', 'triangle', 'rectangle']) >>> other # doctest: +SKIP angles circle 0 triangle 3 rectangle 4
>>> df * other # doctest: +SKIP angles degrees circle 0 NaN triangle 9 NaN rectangle 16 NaN
>>> df.mul(other, fill_value=0) # doctest: +SKIP angles degrees circle 0 0.0 triangle 9 0.0 rectangle 16 0.0
Divide by a MultiIndex by level.
>>> df_multindex = pd.DataFrame({'angles': [0, 3, 4, 4, 5, 6], # doctest: +SKIP ... 'degrees': [360, 180, 360, 360, 540, 720]}, ... index=[['A', 'A', 'A', 'B', 'B', 'B'], ... ['circle', 'triangle', 'rectangle', ... 'square', 'pentagon', 'hexagon']]) >>> df_multindex # doctest: +SKIP angles degrees A circle 0 360 triangle 3 180 rectangle 4 360 B square 4 360 pentagon 5 540 hexagon 6 720
>>> df.div(df_multindex, level=1, fill_value=0) # doctest: +SKIP angles degrees A circle NaN 1.0 triangle 1.0 1.0 rectangle 1.0 1.0 B square 0.0 0.0 pentagon 0.0 0.0 hexagon 0.0 0.0
- rmod(other, axis='columns', level=None, fill_value=None)¶ Get Modulo of dataframe and other, element-wise (binary operator rmod).
This docstring was copied from pandas.core.frame.DataFrame.rmod.
Some inconsistencies with the Dask version may exist.
Equivalent to other % dataframe, but with support to substitute a fill_value for missing data in one of the inputs. With reverse version, mod.
Among flexible wrappers (add, sub, mul, div, mod, pow) to arithmetic operators: +, -, *, /, //, %, **.
Parameters: other : scalar, sequence, Series, or DataFrame
Any single or multiple element data structure, or list-like object.
axis : {0 or ‘index’, 1 or ‘columns’}
Whether to compare by the index (0 or ‘index’) or columns (1 or ‘columns’). For Series input, axis to match Series index on.
level : int or label
Broadcast across a level, matching Index values on the passed MultiIndex level.
fill_value : float or None, default None
Fill existing missing (NaN) values, and any new element needed for successful DataFrame alignment, with this value before computation. If data in both corresponding DataFrame locations is missing the result will be missing.
Returns: DataFrame
Result of the arithmetic operation.
See also
DataFrame.add
- Add DataFrames.
DataFrame.sub
- Subtract DataFrames.
DataFrame.mul
- Multiply DataFrames.
DataFrame.div
- Divide DataFrames (float division).
DataFrame.truediv
- Divide DataFrames (float division).
DataFrame.floordiv
- Divide DataFrames (integer division).
DataFrame.mod
- Calculate modulo (remainder after division).
DataFrame.pow
- Calculate exponential power.
Notes
Mismatched indices will be unioned together.
Examples
>>> df = pd.DataFrame({'angles': [0, 3, 4], # doctest: +SKIP ... 'degrees': [360, 180, 360]}, ... index=['circle', 'triangle', 'rectangle']) >>> df # doctest: +SKIP angles degrees circle 0 360 triangle 3 180 rectangle 4 360
Add a scalar with the operator version, which returns the same results.
>>> df + 1 # doctest: +SKIP angles degrees circle 1 361 triangle 4 181 rectangle 5 361
>>> df.add(1) # doctest: +SKIP angles degrees circle 1 361 triangle 4 181 rectangle 5 361
Divide by constant with reverse version.
>>> df.div(10) # doctest: +SKIP angles degrees circle 0.0 36.0 triangle 0.3 18.0 rectangle 0.4 36.0
>>> df.rdiv(10) # doctest: +SKIP angles degrees circle inf 0.027778 triangle 3.333333 0.055556 rectangle 2.500000 0.027778
Subtract a list and Series by axis with operator version.
>>> df - [1, 2] # doctest: +SKIP angles degrees circle -1 358 triangle 2 178 rectangle 3 358
>>> df.sub([1, 2], axis='columns') # doctest: +SKIP angles degrees circle -1 358 triangle 2 178 rectangle 3 358
>>> df.sub(pd.Series([1, 1, 1], index=['circle', 'triangle', 'rectangle']), # doctest: +SKIP ... axis='index') angles degrees circle -1 359 triangle 2 179 rectangle 3 359
Multiply a DataFrame of different shape with operator version.
>>> other = pd.DataFrame({'angles': [0, 3, 4]}, # doctest: +SKIP ... index=['circle', 'triangle', 'rectangle']) >>> other # doctest: +SKIP angles circle 0 triangle 3 rectangle 4
>>> df * other # doctest: +SKIP angles degrees circle 0 NaN triangle 9 NaN rectangle 16 NaN
>>> df.mul(other, fill_value=0) # doctest: +SKIP angles degrees circle 0 0.0 triangle 9 0.0 rectangle 16 0.0
Divide by a MultiIndex by level.
>>> df_multindex = pd.DataFrame({'angles': [0, 3, 4, 4, 5, 6], # doctest: +SKIP ... 'degrees': [360, 180, 360, 360, 540, 720]}, ... index=[['A', 'A', 'A', 'B', 'B', 'B'], ... ['circle', 'triangle', 'rectangle', ... 'square', 'pentagon', 'hexagon']]) >>> df_multindex # doctest: +SKIP angles degrees A circle 0 360 triangle 3 180 rectangle 4 360 B square 4 360 pentagon 5 540 hexagon 6 720
>>> df.div(df_multindex, level=1, fill_value=0) # doctest: +SKIP angles degrees A circle NaN 1.0 triangle 1.0 1.0 rectangle 1.0 1.0 B square 0.0 0.0 pentagon 0.0 0.0 hexagon 0.0 0.0
- rmul(other, axis='columns', level=None, fill_value=None)¶ Get Multiplication of dataframe and other, element-wise (binary operator rmul).
This docstring was copied from pandas.core.frame.DataFrame.rmul.
Some inconsistencies with the Dask version may exist.
Equivalent to other * dataframe, but with support to substitute a fill_value for missing data in one of the inputs. With reverse version, mul.
Among flexible wrappers (add, sub, mul, div, mod, pow) to arithmetic operators: +, -, *, /, //, %, **.
Parameters: other : scalar, sequence, Series, or DataFrame
Any single or multiple element data structure, or list-like object.
axis : {0 or ‘index’, 1 or ‘columns’}
Whether to compare by the index (0 or ‘index’) or columns (1 or ‘columns’). For Series input, axis to match Series index on.
level : int or label
Broadcast across a level, matching Index values on the passed MultiIndex level.
fill_value : float or None, default None
Fill existing missing (NaN) values, and any new element needed for successful DataFrame alignment, with this value before computation. If data in both corresponding DataFrame locations is missing the result will be missing.
Returns: DataFrame
Result of the arithmetic operation.
See also
DataFrame.add
- Add DataFrames.
DataFrame.sub
- Subtract DataFrames.
DataFrame.mul
- Multiply DataFrames.
DataFrame.div
- Divide DataFrames (float division).
DataFrame.truediv
- Divide DataFrames (float division).
DataFrame.floordiv
- Divide DataFrames (integer division).
DataFrame.mod
- Calculate modulo (remainder after division).
DataFrame.pow
- Calculate exponential power.
Notes
Mismatched indices will be unioned together.
Examples
>>> df = pd.DataFrame({'angles': [0, 3, 4], # doctest: +SKIP ... 'degrees': [360, 180, 360]}, ... index=['circle', 'triangle', 'rectangle']) >>> df # doctest: +SKIP angles degrees circle 0 360 triangle 3 180 rectangle 4 360
Add a scalar with the operator version, which returns the same results.
>>> df + 1 # doctest: +SKIP angles degrees circle 1 361 triangle 4 181 rectangle 5 361
>>> df.add(1) # doctest: +SKIP angles degrees circle 1 361 triangle 4 181 rectangle 5 361
Divide by constant with reverse version.
>>> df.div(10) # doctest: +SKIP angles degrees circle 0.0 36.0 triangle 0.3 18.0 rectangle 0.4 36.0
>>> df.rdiv(10) # doctest: +SKIP angles degrees circle inf 0.027778 triangle 3.333333 0.055556 rectangle 2.500000 0.027778
Subtract a list and Series by axis with operator version.
>>> df - [1, 2] # doctest: +SKIP angles degrees circle -1 358 triangle 2 178 rectangle 3 358
>>> df.sub([1, 2], axis='columns') # doctest: +SKIP angles degrees circle -1 358 triangle 2 178 rectangle 3 358
>>> df.sub(pd.Series([1, 1, 1], index=['circle', 'triangle', 'rectangle']), # doctest: +SKIP ... axis='index') angles degrees circle -1 359 triangle 2 179 rectangle 3 359
Multiply a DataFrame of different shape with operator version.
>>> other = pd.DataFrame({'angles': [0, 3, 4]}, # doctest: +SKIP ... index=['circle', 'triangle', 'rectangle']) >>> other # doctest: +SKIP angles circle 0 triangle 3 rectangle 4
>>> df * other # doctest: +SKIP angles degrees circle 0 NaN triangle 9 NaN rectangle 16 NaN
>>> df.mul(other, fill_value=0) # doctest: +SKIP angles degrees circle 0 0.0 triangle 9 0.0 rectangle 16 0.0
Divide by a MultiIndex by level.
>>> df_multindex = pd.DataFrame({'angles': [0, 3, 4, 4, 5, 6], # doctest: +SKIP ... 'degrees': [360, 180, 360, 360, 540, 720]}, ... index=[['A', 'A', 'A', 'B', 'B', 'B'], ... ['circle', 'triangle', 'rectangle', ... 'square', 'pentagon', 'hexagon']]) >>> df_multindex # doctest: +SKIP angles degrees A circle 0 360 triangle 3 180 rectangle 4 360 B square 4 360 pentagon 5 540 hexagon 6 720
>>> df.div(df_multindex, level=1, fill_value=0) # doctest: +SKIP angles degrees A circle NaN 1.0 triangle 1.0 1.0 rectangle 1.0 1.0 B square 0.0 0.0 pentagon 0.0 0.0 hexagon 0.0 0.0
- rolling(window, min_periods=None, center=False, win_type=None, axis=0)¶ Provides rolling transformations.
Parameters: window : int, str, offset
Size of the moving window. This is the number of observations used for calculating the statistic. When not using a DatetimeIndex, the window size must not be so large as to span more than one adjacent partition. If using an offset or offset alias like ‘5D’, the data must have a DatetimeIndex.
Changed in version 0.15.0: Now accepts offsets and string offset aliases
min_periods : int, default None
Minimum number of observations in window required to have a value (otherwise result is NA).
center : boolean, default False
Set the labels at the center of the window.
win_type : string, default None
Provide a window type. The recognized window types are identical to pandas.
axis : int, default 0
Returns: a Rolling object on which to call a method to compute a statistic
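Examples
A minimal sketch of typical usage, assuming an illustrative frame with a sorted DatetimeIndex (the column name x and the partitioning are arbitrary):
>>> import pandas as pd  # doctest: +SKIP
>>> import dask.dataframe as dd  # doctest: +SKIP
>>> pdf = pd.DataFrame({'x': range(10)}, index=pd.date_range('2000-01-01', periods=10, freq='D'))  # doctest: +SKIP
>>> ddf = dd.from_pandas(pdf, npartitions=2)  # doctest: +SKIP
>>> ddf.rolling(3).mean().compute()  # fixed window of 3 observations  # doctest: +SKIP
>>> ddf.rolling('5D', min_periods=1).sum().compute()  # offset window; requires a DatetimeIndex  # doctest: +SKIP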
- round(decimals=0)¶ Round a DataFrame to a variable number of decimal places.
This docstring was copied from pandas.core.frame.DataFrame.round.
Some inconsistencies with the Dask version may exist.
Parameters: decimals : int, dict, Series
Number of decimal places to round each column to. If an int is given, round each column to the same number of places. Otherwise dict and Series round to variable numbers of places. Column names should be in the keys if decimals is a dict-like, or in the index if decimals is a Series. Any columns not included in decimals will be left as is. Elements of decimals which are not columns of the input will be ignored.
*args
Additional keywords have no effect but might be accepted for compatibility with numpy.
**kwargs
Additional keywords have no effect but might be accepted for compatibility with numpy.
Returns: DataFrame
A DataFrame with the affected columns rounded to the specified number of decimal places.
See also
numpy.around
- Round a numpy array to the given number of decimals.
Series.round
- Round a Series to the given number of decimals.
Examples
>>> df = pd.DataFrame([(.21, .32), (.01, .67), (.66, .03), (.21, .18)], # doctest: +SKIP ... columns=['dogs', 'cats']) >>> df # doctest: +SKIP dogs cats 0 0.21 0.32 1 0.01 0.67 2 0.66 0.03 3 0.21 0.18
By providing an integer each column is rounded to the same number of decimal places
>>> df.round(1) # doctest: +SKIP dogs cats 0 0.2 0.3 1 0.0 0.7 2 0.7 0.0 3 0.2 0.2
With a dict, the number of places for specific columns can be specified with the column names as key and the number of decimal places as value
>>> df.round({'dogs': 1, 'cats': 0}) # doctest: +SKIP dogs cats 0 0.2 0.0 1 0.0 1.0 2 0.7 0.0 3 0.2 0.0
Using a Series, the number of places for specific columns can be specified with the column names as index and the number of decimal places as value
>>> decimals = pd.Series([0, 1], index=['cats', 'dogs']) # doctest: +SKIP >>> df.round(decimals) # doctest: +SKIP dogs cats 0 0.2 0.0 1 0.0 1.0 2 0.7 0.0 3 0.2 0.0
- rpow(other, axis='columns', level=None, fill_value=None)¶ Get Exponential power of dataframe and other, element-wise (binary operator rpow).
This docstring was copied from pandas.core.frame.DataFrame.rpow.
Some inconsistencies with the Dask version may exist.
Equivalent to other ** dataframe, but with support to substitute a fill_value for missing data in one of the inputs. With reverse version, pow.
Among flexible wrappers (add, sub, mul, div, mod, pow) to arithmetic operators: +, -, *, /, //, %, **.
Parameters: other : scalar, sequence, Series, or DataFrame
Any single or multiple element data structure, or list-like object.
axis : {0 or ‘index’, 1 or ‘columns’}
Whether to compare by the index (0 or ‘index’) or columns (1 or ‘columns’). For Series input, axis to match Series index on.
level : int or label
Broadcast across a level, matching Index values on the passed MultiIndex level.
fill_value : float or None, default None
Fill existing missing (NaN) values, and any new element needed for successful DataFrame alignment, with this value before computation. If data in both corresponding DataFrame locations is missing the result will be missing.
Returns: DataFrame
Result of the arithmetic operation.
See also
DataFrame.add
- Add DataFrames.
DataFrame.sub
- Subtract DataFrames.
DataFrame.mul
- Multiply DataFrames.
DataFrame.div
- Divide DataFrames (float division).
DataFrame.truediv
- Divide DataFrames (float division).
DataFrame.floordiv
- Divide DataFrames (integer division).
DataFrame.mod
- Calculate modulo (remainder after division).
DataFrame.pow
- Calculate exponential power.
Notes
Mismatched indices will be unioned together.
Examples
>>> df = pd.DataFrame({'angles': [0, 3, 4], # doctest: +SKIP ... 'degrees': [360, 180, 360]}, ... index=['circle', 'triangle', 'rectangle']) >>> df # doctest: +SKIP angles degrees circle 0 360 triangle 3 180 rectangle 4 360
Add a scalar with the operator version, which returns the same results.
>>> df + 1 # doctest: +SKIP angles degrees circle 1 361 triangle 4 181 rectangle 5 361
>>> df.add(1) # doctest: +SKIP angles degrees circle 1 361 triangle 4 181 rectangle 5 361
Divide by constant with reverse version.
>>> df.div(10) # doctest: +SKIP angles degrees circle 0.0 36.0 triangle 0.3 18.0 rectangle 0.4 36.0
>>> df.rdiv(10) # doctest: +SKIP angles degrees circle inf 0.027778 triangle 3.333333 0.055556 rectangle 2.500000 0.027778
Subtract a list and Series by axis with operator version.
>>> df - [1, 2] # doctest: +SKIP angles degrees circle -1 358 triangle 2 178 rectangle 3 358
>>> df.sub([1, 2], axis='columns') # doctest: +SKIP angles degrees circle -1 358 triangle 2 178 rectangle 3 358
>>> df.sub(pd.Series([1, 1, 1], index=['circle', 'triangle', 'rectangle']), # doctest: +SKIP ... axis='index') angles degrees circle -1 359 triangle 2 179 rectangle 3 359
Multiply a DataFrame of different shape with operator version.
>>> other = pd.DataFrame({'angles': [0, 3, 4]}, # doctest: +SKIP ... index=['circle', 'triangle', 'rectangle']) >>> other # doctest: +SKIP angles circle 0 triangle 3 rectangle 4
>>> df * other # doctest: +SKIP angles degrees circle 0 NaN triangle 9 NaN rectangle 16 NaN
>>> df.mul(other, fill_value=0) # doctest: +SKIP angles degrees circle 0 0.0 triangle 9 0.0 rectangle 16 0.0
Divide by a MultiIndex by level.
>>> df_multindex = pd.DataFrame({'angles': [0, 3, 4, 4, 5, 6], # doctest: +SKIP ... 'degrees': [360, 180, 360, 360, 540, 720]}, ... index=[['A', 'A', 'A', 'B', 'B', 'B'], ... ['circle', 'triangle', 'rectangle', ... 'square', 'pentagon', 'hexagon']]) >>> df_multindex # doctest: +SKIP angles degrees A circle 0 360 triangle 3 180 rectangle 4 360 B square 4 360 pentagon 5 540 hexagon 6 720
>>> df.div(df_multindex, level=1, fill_value=0) # doctest: +SKIP angles degrees A circle NaN 1.0 triangle 1.0 1.0 rectangle 1.0 1.0 B square 0.0 0.0 pentagon 0.0 0.0 hexagon 0.0 0.0
- rsub(other, axis='columns', level=None, fill_value=None)¶ Get Subtraction of dataframe and other, element-wise (binary operator rsub).
This docstring was copied from pandas.core.frame.DataFrame.rsub.
Some inconsistencies with the Dask version may exist.
Equivalent to other - dataframe, but with support to substitute a fill_value for missing data in one of the inputs. With reverse version, sub.
Among flexible wrappers (add, sub, mul, div, mod, pow) to arithmetic operators: +, -, *, /, //, %, **.
Parameters: other : scalar, sequence, Series, or DataFrame
Any single or multiple element data structure, or list-like object.
axis : {0 or ‘index’, 1 or ‘columns’}
Whether to compare by the index (0 or ‘index’) or columns (1 or ‘columns’). For Series input, axis to match Series index on.
level : int or label
Broadcast across a level, matching Index values on the passed MultiIndex level.
fill_value : float or None, default None
Fill existing missing (NaN) values, and any new element needed for successful DataFrame alignment, with this value before computation. If data in both corresponding DataFrame locations is missing the result will be missing.
Returns: DataFrame
Result of the arithmetic operation.
See also
DataFrame.add
- Add DataFrames.
DataFrame.sub
- Subtract DataFrames.
DataFrame.mul
- Multiply DataFrames.
DataFrame.div
- Divide DataFrames (float division).
DataFrame.truediv
- Divide DataFrames (float division).
DataFrame.floordiv
- Divide DataFrames (integer division).
DataFrame.mod
- Calculate modulo (remainder after division).
DataFrame.pow
- Calculate exponential power.
Notes
Mismatched indices will be unioned together.
Examples
>>> df = pd.DataFrame({'angles': [0, 3, 4], # doctest: +SKIP ... 'degrees': [360, 180, 360]}, ... index=['circle', 'triangle', 'rectangle']) >>> df # doctest: +SKIP angles degrees circle 0 360 triangle 3 180 rectangle 4 360
Add a scalar with the operator version, which returns the same results.
>>> df + 1 # doctest: +SKIP angles degrees circle 1 361 triangle 4 181 rectangle 5 361
>>> df.add(1) # doctest: +SKIP angles degrees circle 1 361 triangle 4 181 rectangle 5 361
Divide by constant with reverse version.
>>> df.div(10) # doctest: +SKIP angles degrees circle 0.0 36.0 triangle 0.3 18.0 rectangle 0.4 36.0
>>> df.rdiv(10) # doctest: +SKIP angles degrees circle inf 0.027778 triangle 3.333333 0.055556 rectangle 2.500000 0.027778
Subtract a list and Series by axis with operator version.
>>> df - [1, 2] # doctest: +SKIP angles degrees circle -1 358 triangle 2 178 rectangle 3 358
>>> df.sub([1, 2], axis='columns') # doctest: +SKIP angles degrees circle -1 358 triangle 2 178 rectangle 3 358
>>> df.sub(pd.Series([1, 1, 1], index=['circle', 'triangle', 'rectangle']), # doctest: +SKIP ... axis='index') angles degrees circle -1 359 triangle 2 179 rectangle 3 359
Multiply a DataFrame of different shape with operator version.
>>> other = pd.DataFrame({'angles': [0, 3, 4]}, # doctest: +SKIP ... index=['circle', 'triangle', 'rectangle']) >>> other # doctest: +SKIP angles circle 0 triangle 3 rectangle 4
>>> df * other # doctest: +SKIP angles degrees circle 0 NaN triangle 9 NaN rectangle 16 NaN
>>> df.mul(other, fill_value=0) # doctest: +SKIP angles degrees circle 0 0.0 triangle 9 0.0 rectangle 16 0.0
Divide by a MultiIndex by level.
>>> df_multindex = pd.DataFrame({'angles': [0, 3, 4, 4, 5, 6], # doctest: +SKIP ... 'degrees': [360, 180, 360, 360, 540, 720]}, ... index=[['A', 'A', 'A', 'B', 'B', 'B'], ... ['circle', 'triangle', 'rectangle', ... 'square', 'pentagon', 'hexagon']]) >>> df_multindex # doctest: +SKIP angles degrees A circle 0 360 triangle 3 180 rectangle 4 360 B square 4 360 pentagon 5 540 hexagon 6 720
>>> df.div(df_multindex, level=1, fill_value=0) # doctest: +SKIP angles degrees A circle NaN 1.0 triangle 1.0 1.0 rectangle 1.0 1.0 B square 0.0 0.0 pentagon 0.0 0.0 hexagon 0.0 0.0
- rtruediv(other, axis='columns', level=None, fill_value=None)¶ Get Floating division of dataframe and other, element-wise (binary operator rtruediv).
This docstring was copied from pandas.core.frame.DataFrame.rtruediv.
Some inconsistencies with the Dask version may exist.
Equivalent to other / dataframe, but with support to substitute a fill_value for missing data in one of the inputs. With reverse version, truediv.
Among flexible wrappers (add, sub, mul, div, mod, pow) to arithmetic operators: +, -, *, /, //, %, **.
Parameters: other : scalar, sequence, Series, or DataFrame
Any single or multiple element data structure, or list-like object.
axis : {0 or ‘index’, 1 or ‘columns’}
Whether to compare by the index (0 or ‘index’) or columns (1 or ‘columns’). For Series input, axis to match Series index on.
level : int or label
Broadcast across a level, matching Index values on the passed MultiIndex level.
fill_value : float or None, default None
Fill existing missing (NaN) values, and any new element needed for successful DataFrame alignment, with this value before computation. If data in both corresponding DataFrame locations is missing the result will be missing.
Returns: DataFrame
Result of the arithmetic operation.
See also
DataFrame.add
- Add DataFrames.
DataFrame.sub
- Subtract DataFrames.
DataFrame.mul
- Multiply DataFrames.
DataFrame.div
- Divide DataFrames (float division).
DataFrame.truediv
- Divide DataFrames (float division).
DataFrame.floordiv
- Divide DataFrames (integer division).
DataFrame.mod
- Calculate modulo (remainder after division).
DataFrame.pow
- Calculate exponential power.
Notes
Mismatched indices will be unioned together.
Examples
>>> df = pd.DataFrame({'angles': [0, 3, 4], # doctest: +SKIP ... 'degrees': [360, 180, 360]}, ... index=['circle', 'triangle', 'rectangle']) >>> df # doctest: +SKIP angles degrees circle 0 360 triangle 3 180 rectangle 4 360
Add a scalar with the operator version, which returns the same results.
>>> df + 1 # doctest: +SKIP angles degrees circle 1 361 triangle 4 181 rectangle 5 361
>>> df.add(1) # doctest: +SKIP angles degrees circle 1 361 triangle 4 181 rectangle 5 361
Divide by constant with reverse version.
>>> df.div(10) # doctest: +SKIP angles degrees circle 0.0 36.0 triangle 0.3 18.0 rectangle 0.4 36.0
>>> df.rdiv(10) # doctest: +SKIP angles degrees circle inf 0.027778 triangle 3.333333 0.055556 rectangle 2.500000 0.027778
Subtract a list and Series by axis with operator version.
>>> df - [1, 2] # doctest: +SKIP angles degrees circle -1 358 triangle 2 178 rectangle 3 358
>>> df.sub([1, 2], axis='columns') # doctest: +SKIP angles degrees circle -1 358 triangle 2 178 rectangle 3 358
>>> df.sub(pd.Series([1, 1, 1], index=['circle', 'triangle', 'rectangle']), # doctest: +SKIP ... axis='index') angles degrees circle -1 359 triangle 2 179 rectangle 3 359
Multiply a DataFrame of different shape with operator version.
>>> other = pd.DataFrame({'angles': [0, 3, 4]}, # doctest: +SKIP ... index=['circle', 'triangle', 'rectangle']) >>> other # doctest: +SKIP angles circle 0 triangle 3 rectangle 4
>>> df * other # doctest: +SKIP angles degrees circle 0 NaN triangle 9 NaN rectangle 16 NaN
>>> df.mul(other, fill_value=0) # doctest: +SKIP angles degrees circle 0 0.0 triangle 9 0.0 rectangle 16 0.0
Divide by a MultiIndex by level.
>>> df_multindex = pd.DataFrame({'angles': [0, 3, 4, 4, 5, 6], # doctest: +SKIP ... 'degrees': [360, 180, 360, 360, 540, 720]}, ... index=[['A', 'A', 'A', 'B', 'B', 'B'], ... ['circle', 'triangle', 'rectangle', ... 'square', 'pentagon', 'hexagon']]) >>> df_multindex # doctest: +SKIP angles degrees A circle 0 360 triangle 3 180 rectangle 4 360 B square 4 360 pentagon 5 540 hexagon 6 720
>>> df.div(df_multindex, level=1, fill_value=0) # doctest: +SKIP angles degrees A circle NaN 1.0 triangle 1.0 1.0 rectangle 1.0 1.0 B square 0.0 0.0 pentagon 0.0 0.0 hexagon 0.0 0.0
- sample(n=None, frac=None, replace=False, random_state=None)¶ Random sample of items
Parameters: n : int, optional
Number of items to return. Not supported by dask; use frac instead.
frac : float, optional
Fraction of axis items to return.
replace : boolean, optional
Sample with or without replacement. Default = False.
random_state : int or np.random.RandomState
If an int, we create a new RandomState with it as the seed; otherwise we draw from the passed RandomState.
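Examples
A brief sketch, assuming df is an existing dask DataFrame:
>>> tenth = df.sample(frac=0.1, random_state=42)  # keep roughly 10% of the rows  # doctest: +SKIP
>>> boot = df.sample(frac=1.0, replace=True, random_state=42)  # bootstrap resample  # doctest: +SKIP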
- select_dtypes(include=None, exclude=None)¶ Return a subset of the DataFrame’s columns based on the column dtypes.
This docstring was copied from pandas.core.frame.DataFrame.select_dtypes.
Some inconsistencies with the Dask version may exist.
Parameters: include, exclude : scalar or list-like
A selection of dtypes or strings to be included/excluded. At least one of these parameters must be supplied.
Returns: DataFrame
The subset of the frame including the dtypes in include and excluding the dtypes in exclude.
Raises: ValueError
- If both of include and exclude are empty
- If include and exclude have overlapping elements
- If any kind of string dtype is passed in.
Notes
- To select all numeric types, use np.number or 'number'
- To select strings you must use the object dtype, but note that this will return all object dtype columns
- See the numpy dtype hierarchy
- To select datetimes, use np.datetime64, 'datetime' or 'datetime64'
- To select timedeltas, use np.timedelta64, 'timedelta' or 'timedelta64'
- To select Pandas categorical dtypes, use 'category'
- To select Pandas datetimetz dtypes, use 'datetimetz' (new in 0.20.0) or 'datetime64[ns, tz]'
Examples
>>> df = pd.DataFrame({'a': [1, 2] * 3, # doctest: +SKIP ... 'b': [True, False] * 3, ... 'c': [1.0, 2.0] * 3}) >>> df # doctest: +SKIP a b c 0 1 True 1.0 1 2 False 2.0 2 1 True 1.0 3 2 False 2.0 4 1 True 1.0 5 2 False 2.0
>>> df.select_dtypes(include='bool') # doctest: +SKIP b 0 True 1 False 2 True 3 False 4 True 5 False
>>> df.select_dtypes(include=['float64']) # doctest: +SKIP c 0 1.0 1 2.0 2 1.0 3 2.0 4 1.0 5 2.0
>>> df.select_dtypes(exclude=['int']) # doctest: +SKIP b c 0 True 1.0 1 False 2.0 2 True 1.0 3 False 2.0 4 True 1.0 5 False 2.0
- sem(axis=None, skipna=None, ddof=1, split_every=False)¶ Return unbiased standard error of the mean over requested axis.
This docstring was copied from pandas.core.frame.DataFrame.sem.
Some inconsistencies with the Dask version may exist.
Normalized by N-1 by default. This can be changed using the ddof argument
Parameters: axis : {index (0), columns (1)}
skipna : bool, default True
Exclude NA/null values. If an entire row/column is NA, the result will be NA
level : int or level name, default None (Not supported in Dask)
If the axis is a MultiIndex (hierarchical), count along a particular level, collapsing into a Series
ddof : int, default 1
Delta Degrees of Freedom. The divisor used in calculations is N - ddof, where N represents the number of elements.
numeric_only : bool, default None (Not supported in Dask)
Include only float, int, boolean columns. If None, will attempt to use everything, then use only numeric data. Not implemented for Series.
Returns: Series or DataFrame (if level specified)
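Examples
A short sketch, assuming df is an existing dask DataFrame with numeric columns:
>>> df.sem().compute()  # standard error of the mean for each column (ddof=1)  # doctest: +SKIP
>>> df.sem(ddof=0).compute()  # divide by N instead of N - 1  # doctest: +SKIP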
- set_index(other, drop=True, sorted=False, npartitions=None, divisions=None, inplace=False, **kwargs)¶ Set the DataFrame index (row labels) using an existing column.
This realigns the dataset to be sorted by a new column. This can have a significant impact on performance, because joins, groupbys, lookups, etc. are all much faster on that column. However, this performance increase comes with a cost: sorting a parallel dataset requires expensive shuffles. Often we set_index once directly after data ingest and filtering and then perform many cheap computations off of the sorted dataset.
This function operates exactly like pandas.set_index except with different performance costs (dask dataframe set_index is much more expensive). Under normal operation this function does an initial pass over the index column to compute approximate quantiles to serve as future divisions. It then passes over the data a second time, splitting up each input partition into several pieces and sharing those pieces to all of the output partitions, now in sorted order.
In some cases we can alleviate those costs, for example if your dataset is sorted already then we can avoid making many small pieces, or if you know good values to split the new index column then we can avoid the initial pass over the data. For example, if your new index is a datetime index and your data is already sorted by day then this entire operation can be done for free. You can control these options with the following parameters.
Parameters: df: Dask DataFrame
index: string or Dask Series
npartitions: int, None, or ‘auto’
The ideal number of output partitions. If None use the same as the input. If ‘auto’ then decide by memory use.
shuffle: string, optional
Either 'disk' for single-node operation or 'tasks' for distributed operation. Will be inferred by your current scheduler.
sorted: bool, optional
If the index column is already sorted in increasing order. Defaults to False
divisions: list, optional
Known values on which to separate index values of the partitions. See https://docs.dask.org/en/latest/dataframe-design.html#partitions Defaults to computing this with a single pass over the data. Note that if sorted=True, specified divisions are assumed to match the existing partitions in the data. If sorted=False, you should leave divisions empty and call repartition after set_index.
inplace : bool, optional
Modifying the DataFrame in place is not supported by Dask. Defaults to False.
compute: bool
Whether or not to trigger an immediate computation. Defaults to False. Note that even if you set compute=False, an immediate computation will still be triggered if divisions is None.
Examples
>>> df2 = df.set_index('x')  # doctest: +SKIP
>>> df2 = df.set_index(d.x)  # doctest: +SKIP
>>> df2 = df.set_index(d.timestamp, sorted=True)  # doctest: +SKIP
A common case is when we have a datetime column that we know to be sorted and is cleanly divided by day. We can set this index for free by specifying both that the column is pre-sorted and the particular divisions along which it is separated.
>>> import pandas as pd
>>> divisions = pd.date_range('2000', '2010', freq='1D')
>>> df2 = df.set_index('timestamp', sorted=True, divisions=divisions)  # doctest: +SKIP
- shape¶ Return a tuple representing the dimensionality of the DataFrame.
The number of rows is a Delayed result. The number of columns is a concrete integer.
Examples
>>> df.shape # doctest: +SKIP (Delayed('int-07f06075-5ecc-4d77-817e-63c69a9188a8'), 2)
- shift(periods=1, freq=None, axis=0)¶ Shift index by desired number of periods with an optional time freq.
This docstring was copied from pandas.core.frame.DataFrame.shift.
Some inconsistencies with the Dask version may exist.
When freq is not passed, shift the index without realigning the data. If freq is passed (in this case, the index must be date or datetime, or it will raise a NotImplementedError), the index will be increased using the periods and the freq.
Parameters: periods : int
Number of periods to shift. Can be positive or negative.
freq : DateOffset, tseries.offsets, timedelta, or str, optional
Offset to use from the tseries module or time rule (e.g. ‘EOM’). If freq is specified then the index values are shifted but the data is not realigned. That is, use freq if you would like to extend the index when shifting and preserve the original data.
axis : {0 or ‘index’, 1 or ‘columns’, None}, default None
Shift direction.
fill_value : object, optional (Not supported in Dask)
The scalar value to use for newly introduced missing values. The default depends on the dtype of self. For numeric data, np.nan is used. For datetime, timedelta, or period data, etc., NaT is used. For extension dtypes, self.dtype.na_value is used.
Changed in version 0.24.0.
Returns: DataFrame
Copy of input object, shifted.
See also
Index.shift
- Shift values of Index.
DatetimeIndex.shift
- Shift values of DatetimeIndex.
PeriodIndex.shift
- Shift values of PeriodIndex.
tshift
- Shift the time index, using the index’s frequency if available.
Examples
>>> df = pd.DataFrame({'Col1': [10, 20, 15, 30, 45], # doctest: +SKIP ... 'Col2': [13, 23, 18, 33, 48], ... 'Col3': [17, 27, 22, 37, 52]})
>>> df.shift(periods=3) # doctest: +SKIP Col1 Col2 Col3 0 NaN NaN NaN 1 NaN NaN NaN 2 NaN NaN NaN 3 10.0 13.0 17.0 4 20.0 23.0 27.0
>>> df.shift(periods=1, axis='columns') # doctest: +SKIP Col1 Col2 Col3 0 NaN 10.0 13.0 1 NaN 20.0 23.0 2 NaN 15.0 18.0 3 NaN 30.0 33.0 4 NaN 45.0 48.0
>>> df.shift(periods=3, fill_value=0) # doctest: +SKIP Col1 Col2 Col3 0 0 0 0 1 0 0 0 2 0 0 0 3 10 13 17 4 20 23 27
- size¶ Size of the Series or DataFrame as a Delayed object.
Examples
>>> series.size # doctest: +SKIP dd.Scalar<size-ag..., dtype=int64>
- squeeze(axis=None)¶ Squeeze 1 dimensional axis objects into scalars.
This docstring was copied from pandas.core.frame.DataFrame.squeeze.
Some inconsistencies with the Dask version may exist.
Series or DataFrames with a single element are squeezed to a scalar. DataFrames with a single column or a single row are squeezed to a Series. Otherwise the object is unchanged.
This method is most useful when you don’t know if your object is a Series or DataFrame, but you do know it has just a single column. In that case you can safely call squeeze to ensure you have a Series.
Parameters: axis : {0 or ‘index’, 1 or ‘columns’, None}, default None
A specific axis to squeeze. By default, all length-1 axes are squeezed.
New in version 0.20.0.
Returns: DataFrame, Series, or scalar
The projection after squeezing axis or all the axes.
See also
Series.iloc
- Integer-location based indexing for selecting scalars.
DataFrame.iloc
- Integer-location based indexing for selecting Series.
Series.to_frame
- Inverse of DataFrame.squeeze for a single-column DataFrame.
Examples
>>> primes = pd.Series([2, 3, 5, 7]) # doctest: +SKIP
Slicing might produce a Series with a single value:
>>> even_primes = primes[primes % 2 == 0] # doctest: +SKIP >>> even_primes # doctest: +SKIP 0 2 dtype: int64
>>> even_primes.squeeze() # doctest: +SKIP 2
Squeezing objects with more than one value in every axis does nothing:
>>> odd_primes = primes[primes % 2 == 1] # doctest: +SKIP >>> odd_primes # doctest: +SKIP 1 3 2 5 3 7 dtype: int64
>>> odd_primes.squeeze() # doctest: +SKIP 1 3 2 5 3 7 dtype: int64
Squeezing is even more effective when used with DataFrames.
>>> df = pd.DataFrame([[1, 2], [3, 4]], columns=['a', 'b']) # doctest: +SKIP >>> df # doctest: +SKIP a b 0 1 2 1 3 4
Slicing a single column will produce a DataFrame with the columns having only one value:
>>> df_a = df[['a']] # doctest: +SKIP >>> df_a # doctest: +SKIP a 0 1 1 3
So the columns can be squeezed down, resulting in a Series:
>>> df_a.squeeze('columns') # doctest: +SKIP 0 1 1 3 Name: a, dtype: int64
Slicing a single row from a single column will produce a single scalar DataFrame:
>>> df_0a = df.loc[df.index < 1, ['a']] # doctest: +SKIP >>> df_0a # doctest: +SKIP a 0 1
Squeezing the rows produces a single scalar Series:
>>> df_0a.squeeze('rows') # doctest: +SKIP a 1 Name: 0, dtype: int64
Squeezing all axes will project directly into a scalar:
>>> df_0a.squeeze() # doctest: +SKIP 1
- std(axis=None, skipna=True, ddof=1, split_every=False, dtype=None, out=None)¶ Return sample standard deviation over requested axis.
This docstring was copied from pandas.core.frame.DataFrame.std.
Some inconsistencies with the Dask version may exist.
Normalized by N-1 by default. This can be changed using the ddof argument
Parameters: axis : {index (0), columns (1)}
skipna : bool, default True
Exclude NA/null values. If an entire row/column is NA, the result will be NA
level : int or level name, default None (Not supported in Dask)
If the axis is a MultiIndex (hierarchical), count along a particular level, collapsing into a Series
ddof : int, default 1
Delta Degrees of Freedom. The divisor used in calculations is N - ddof, where N represents the number of elements.
numeric_only : bool, default None (Not supported in Dask)
Include only float, int, boolean columns. If None, will attempt to use everything, then use only numeric data. Not implemented for Series.
Returns: Series or DataFrame (if level specified)
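Examples
A short sketch, assuming df is an existing dask DataFrame with numeric columns:
>>> df.std().compute()  # sample standard deviation per column (ddof=1)  # doctest: +SKIP
>>> df.std(ddof=0).compute()  # population standard deviation  # doctest: +SKIP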
- sub(other, axis='columns', level=None, fill_value=None)¶ Get Subtraction of dataframe and other, element-wise (binary operator sub).
This docstring was copied from pandas.core.frame.DataFrame.sub.
Some inconsistencies with the Dask version may exist.
Equivalent to dataframe - other, but with support to substitute a fill_value for missing data in one of the inputs. With reverse version, rsub.
Among flexible wrappers (add, sub, mul, div, mod, pow) to arithmetic operators: +, -, *, /, //, %, **.
Parameters: other : scalar, sequence, Series, or DataFrame
Any single or multiple element data structure, or list-like object.
axis : {0 or ‘index’, 1 or ‘columns’}
Whether to compare by the index (0 or ‘index’) or columns (1 or ‘columns’). For Series input, axis to match Series index on.
level : int or label
Broadcast across a level, matching Index values on the passed MultiIndex level.
fill_value : float or None, default None
Fill existing missing (NaN) values, and any new element needed for successful DataFrame alignment, with this value before computation. If data in both corresponding DataFrame locations is missing the result will be missing.
Returns: DataFrame
Result of the arithmetic operation.
See also
DataFrame.add
- Add DataFrames.
DataFrame.sub
- Subtract DataFrames.
DataFrame.mul
- Multiply DataFrames.
DataFrame.div
- Divide DataFrames (float division).
DataFrame.truediv
- Divide DataFrames (float division).
DataFrame.floordiv
- Divide DataFrames (integer division).
DataFrame.mod
- Calculate modulo (remainder after division).
DataFrame.pow
- Calculate exponential power.
Notes
Mismatched indices will be unioned together.
Examples
>>> df = pd.DataFrame({'angles': [0, 3, 4], # doctest: +SKIP ... 'degrees': [360, 180, 360]}, ... index=['circle', 'triangle', 'rectangle']) >>> df # doctest: +SKIP angles degrees circle 0 360 triangle 3 180 rectangle 4 360
Add a scalar with the operator version, which returns the same results.
>>> df + 1 # doctest: +SKIP angles degrees circle 1 361 triangle 4 181 rectangle 5 361
>>> df.add(1) # doctest: +SKIP angles degrees circle 1 361 triangle 4 181 rectangle 5 361
Divide by constant with reverse version.
>>> df.div(10) # doctest: +SKIP angles degrees circle 0.0 36.0 triangle 0.3 18.0 rectangle 0.4 36.0
>>> df.rdiv(10) # doctest: +SKIP angles degrees circle inf 0.027778 triangle 3.333333 0.055556 rectangle 2.500000 0.027778
Subtract a list and Series by axis with operator version.
>>> df - [1, 2] # doctest: +SKIP angles degrees circle -1 358 triangle 2 178 rectangle 3 358
>>> df.sub([1, 2], axis='columns') # doctest: +SKIP angles degrees circle -1 358 triangle 2 178 rectangle 3 358
>>> df.sub(pd.Series([1, 1, 1], index=['circle', 'triangle', 'rectangle']), # doctest: +SKIP ... axis='index') angles degrees circle -1 359 triangle 2 179 rectangle 3 359
Multiply a DataFrame of different shape with operator version.
>>> other = pd.DataFrame({'angles': [0, 3, 4]}, # doctest: +SKIP ... index=['circle', 'triangle', 'rectangle']) >>> other # doctest: +SKIP angles circle 0 triangle 3 rectangle 4
>>> df * other # doctest: +SKIP angles degrees circle 0 NaN triangle 9 NaN rectangle 16 NaN
>>> df.mul(other, fill_value=0) # doctest: +SKIP angles degrees circle 0 0.0 triangle 9 0.0 rectangle 16 0.0
Divide by a MultiIndex by level.
>>> df_multindex = pd.DataFrame({'angles': [0, 3, 4, 4, 5, 6], # doctest: +SKIP ... 'degrees': [360, 180, 360, 360, 540, 720]}, ... index=[['A', 'A', 'A', 'B', 'B', 'B'], ... ['circle', 'triangle', 'rectangle', ... 'square', 'pentagon', 'hexagon']]) >>> df_multindex # doctest: +SKIP angles degrees A circle 0 360 triangle 3 180 rectangle 4 360 B square 4 360 pentagon 5 540 hexagon 6 720
>>> df.div(df_multindex, level=1, fill_value=0) # doctest: +SKIP angles degrees A circle NaN 1.0 triangle 1.0 1.0 rectangle 1.0 1.0 B square 0.0 0.0 pentagon 0.0 0.0 hexagon 0.0 0.0
- sum(axis=None, skipna=True, split_every=False, dtype=None, out=None, min_count=None)¶ Return the sum of the values for the requested axis.
This docstring was copied from pandas.core.frame.DataFrame.sum.
Some inconsistencies with the Dask version may exist.
This is equivalent to the method numpy.sum.
Parameters: axis : {index (0), columns (1)}
Axis for the function to be applied on.
skipna : bool, default True
Exclude NA/null values when computing the result.
level : int or level name, default None (Not supported in Dask)
If the axis is a MultiIndex (hierarchical), count along a particular level, collapsing into a Series.
numeric_only : bool, default None (Not supported in Dask)
Include only float, int, boolean columns. If None, will attempt to use everything, then use only numeric data. Not implemented for Series.
min_count : int, default 0
The required number of valid values to perform the operation. If fewer than min_count non-NA values are present the result will be NA.
New in version 0.22.0: Added with the default being 0. This means the sum of an all-NA or empty Series is 0, and the product of an all-NA or empty Series is 1.
**kwargs
Additional keyword arguments to be passed to the function.
Returns: Series or DataFrame (if level specified)
See also
Series.sum
- Return the sum.
Series.min
- Return the minimum.
Series.max
- Return the maximum.
Series.idxmin
- Return the index of the minimum.
Series.idxmax
- Return the index of the maximum.
DataFrame.sum
- Return the sum over the requested axis.
DataFrame.min
- Return the minimum over the requested axis.
DataFrame.max
- Return the maximum over the requested axis.
DataFrame.idxmin
- Return the index of the minimum over the requested axis.
DataFrame.idxmax
- Return the index of the maximum over the requested axis.
Examples
>>> idx = pd.MultiIndex.from_arrays([ # doctest: +SKIP ... ['warm', 'warm', 'cold', 'cold'], ... ['dog', 'falcon', 'fish', 'spider']], ... names=['blooded', 'animal']) >>> s = pd.Series([4, 2, 0, 8], name='legs', index=idx) # doctest: +SKIP >>> s # doctest: +SKIP blooded animal warm dog 4 falcon 2 cold fish 0 spider 8 Name: legs, dtype: int64
>>> s.sum() # doctest: +SKIP 14
Sum using level names, as well as indices.
>>> s.sum(level='blooded') # doctest: +SKIP blooded warm 6 cold 8 Name: legs, dtype: int64
>>> s.sum(level=0) # doctest: +SKIP blooded warm 6 cold 8 Name: legs, dtype: int64
By default, the sum of an empty or all-NA Series is 0.
>>> pd.Series([]).sum() # min_count=0 is the default # doctest: +SKIP 0.0
This can be controlled with the min_count parameter. For example, if you’d like the sum of an empty series to be NaN, pass min_count=1.
>>> pd.Series([]).sum(min_count=1) # doctest: +SKIP nan
Thanks to the skipna parameter, min_count handles all-NA and empty series identically.
>>> pd.Series([np.nan]).sum() # doctest: +SKIP 0.0
>>> pd.Series([np.nan]).sum(min_count=1) # doctest: +SKIP nan
- tail(n=5, compute=True)¶ Last n rows of the dataset
Caveat: this only checks the last n rows of the last partition.
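Examples
A brief sketch, assuming df is an existing dask DataFrame:
>>> df.tail()  # last 5 rows, read from the final partition only  # doctest: +SKIP
>>> df.tail(10, compute=False)  # keep the result lazy instead of returning pandas  # doctest: +SKIP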
- to_bag(index=False)¶ Create Dask Bag from a Dask DataFrame
Parameters: index : bool, optional
If True, the elements are tuples of (index, value), otherwise they’re just the value. Default is False.
Examples
>>> bag = df.to_bag() # doctest: +SKIP
- to_csv(filename, **kwargs)¶ Store Dask DataFrame to CSV files
One filename per partition will be created. You can specify the filenames in a variety of ways.
Use a globstring:
>>> df.to_csv('/path/to/data/export-*.csv')
The * will be replaced by the increasing sequence 0, 1, 2, …
/path/to/data/export-0.csv /path/to/data/export-1.csv
Use a globstring and a name_function= keyword argument. The name_function function should expect an integer and produce a string. Strings produced by name_function must preserve the order of their respective partition indices.
>>> from datetime import date, timedelta
>>> def name(i):
...     return str(date(2015, 1, 1) + i * timedelta(days=1))
>>> name(0)
'2015-01-01'
>>> name(15)
'2015-01-16'
>>> df.to_csv('/path/to/data/export-*.csv', name_function=name) # doctest: +SKIP
/path/to/data/export-2015-01-01.csv /path/to/data/export-2015-01-02.csv ...
You can also provide an explicit list of paths:
>>> paths = ['/path/to/data/alice.csv', '/path/to/data/bob.csv', ...]
>>> df.to_csv(paths)
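Or, as a rough sketch, write everything into one file with single_file=True (the path is illustrative; see the single_file parameter below for filesystem caveats):
>>> df.to_csv('/path/to/data/export.csv', single_file=True)  # doctest: +SKIP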
Parameters: filename : string
Path glob indicating the naming scheme for the output files
name_function : callable, default None
Function accepting an integer (partition index) and producing a string to replace the asterisk in the given filename globstring. Should preserve the lexicographic order of partitions. Not supported when single_file is True.
single_file : bool, default False
Whether to save everything into a single CSV file. Under the single file mode, each partition is appended at the end of the specified CSV file. Note that not all filesystems support the append mode and thus the single file mode, especially on cloud storage systems such as S3 or GCS. A warning will be issued when writing to a file that is not backed by a local filesystem.
compression : string or None
String like ‘gzip’ or ‘xz’. Must support efficient random access. Filenames with extensions corresponding to known compression algorithms (gz, bz2) will be compressed accordingly automatically
sep : character, default ‘,’
Field delimiter for the output file
na_rep : string, default ‘’
Missing data representation
float_format : string, default None
Format string for floating point numbers
columns : sequence, optional
Columns to write
header : boolean or list of string, default True
Write out column names. If a list of string is given it is assumed to be aliases for the column names
header_first_partition_only : boolean, default None
If set to True, only write the header row in the first output file. By default, headers are written to all partitions under the multiple file mode (single_file is False) and written only once under the single file mode (single_file is True). It must not be False under the single file mode.
index : boolean, default True
Write row names (index)
index_label : string or sequence, or False, default None
Column label for index column(s) if desired. If None is given, and header and index are True, then the index names are used. A sequence should be given if the DataFrame uses MultiIndex. If False do not print fields for index names. Use index_label=False for easier importing in R
nanRep : None
deprecated, use na_rep
mode : str
Python write mode, default ‘w’
encoding : string, optional
A string representing the encoding to use in the output file, defaults to ‘ascii’ on Python 2 and ‘utf-8’ on Python 3.
compression : string, optional
a string representing the compression to use in the output file, allowed values are ‘gzip’, ‘bz2’, ‘xz’, only used when the first argument is a filename
line_terminator : string, default ‘\n’
The newline character or character sequence to use in the output file
quoting : optional constant from csv module
defaults to csv.QUOTE_MINIMAL
quotechar : string (length 1), default ‘”’
character used to quote fields
doublequote : boolean, default True
Control quoting of quotechar inside a field
escapechar : string (length 1), default None
character used to escape sep and quotechar when appropriate
chunksize : int or None
rows to write at a time
tupleize_cols : boolean, default False
write multi_index columns as a list of tuples (if True) or in the new, expanded format (if False)
date_format : string, default None
Format string for datetime objects
decimal: string, default ‘.’
Character recognized as decimal separator. E.g. use ‘,’ for European data
storage_options: dict
Parameters passed on to the backend filesystem class.
Returns: The names of the file written if they were computed right away
If not, the delayed tasks associated to the writing of the files
Raises: ValueError
If header_first_partition_only is set to False or name_function is specified when single_file is True.
- to_dask_array(lengths=None)¶ Convert a dask DataFrame to a dask array.
Parameters: lengths : bool or Sequence of ints, optional
How to determine the chunks sizes for the output array. By default, the output array will have unknown chunk lengths along the first axis, which can cause some later operations to fail.
- True : immediately compute the length of each partition
- Sequence : a sequence of integers to use for the chunk sizes on the first axis. These values are not validated for correctness, beyond ensuring that the number of items matches the number of partitions.
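Examples
A short sketch, assuming df is an existing dask DataFrame:
>>> arr = df.to_dask_array()  # chunk sizes along axis 0 stay unknown  # doctest: +SKIP
>>> arr = df.to_dask_array(lengths=True)  # computes each partition's length first  # doctest: +SKIP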
- to_delayed(optimize_graph=True)¶ Convert into a list of dask.delayed objects, one per partition.
Parameters: optimize_graph : bool, optional
If True [default], the graph is optimized before converting into dask.delayed objects.
See also
Examples
>>> partitions = df.to_delayed() # doctest: +SKIP
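A rough sketch of consuming the partitions with dask.delayed (the use of len here is only an illustration):
>>> from dask import delayed, compute  # doctest: +SKIP
>>> partitions = df.to_delayed()  # one Delayed pandas DataFrame per partition  # doctest: +SKIP
>>> sizes = [delayed(len)(part) for part in partitions]  # doctest: +SKIP
>>> compute(*sizes)  # doctest: +SKIP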
- to_hdf(path_or_buf, key, mode='a', append=False, **kwargs)¶ Store Dask Dataframe to Hierarchical Data Format (HDF) files
This is a parallel version of the Pandas function of the same name. Please see the Pandas docstring for more detailed information about shared keyword arguments.
This function differs from the Pandas version by saving the many partitions of a Dask DataFrame in parallel, either to many files, or to many datasets within the same file. You may specify this parallelism with an asterisk * within the filename or datapath, and an optional name_function. The asterisk will be replaced with an increasing sequence of integers starting from 0 or with the result of calling name_function on each of those integers.
This function only supports the Pandas 'table' format, not the more specialized 'fixed' format.
Parameters: path : string, pathlib.Path
Path to a target filename. Supports strings, pathlib.Path, or any object implementing the __fspath__ protocol. May contain a * to denote many filenames.
key : string
Datapath within the files. May contain a * to denote many locations.
name_function : function
A function to convert the * in the above options to a string. Should take in a number from 0 to the number of partitions and return a string. (See examples below.)
compute : bool
Whether or not to execute immediately. If False then this returns a dask.Delayed value.
lock : Lock, optional
Lock to use to prevent concurrency issues. By default a threading.Lock, multiprocessing.Lock or SerializableLock will be used depending on your scheduler if a lock is required. See dask.utils.get_scheduler_lock for more information about lock selection.
scheduler : string
The scheduler to use, like “threads” or “processes”
**other:
See pandas.to_hdf for more information
Returns: filenames : list
Returned if compute is True. List of file names that each partition is saved to.
delayed : dask.Delayed
Returned if compute is False. Delayed object to execute to_hdf when computed.
Examples
Save Data to a single file
>>> df.to_hdf('output.hdf', '/data') # doctest: +SKIP
Save data to multiple datapaths within the same file:
>>> df.to_hdf('output.hdf', '/data-*') # doctest: +SKIP
Save data to multiple files:
>>> df.to_hdf('output-*.hdf', '/data') # doctest: +SKIP
Save data to multiple files, using the multiprocessing scheduler:
>>> df.to_hdf('output-*.hdf', '/data', scheduler='processes') # doctest: +SKIP
Specify custom naming scheme. This writes files as ‘2000-01-01.hdf’, ‘2000-01-02.hdf’, ‘2000-01-03.hdf’, etc..
>>> from datetime import date, timedelta
>>> base = date(year=2000, month=1, day=1)
>>> def name_function(i):
...     ''' Convert integer 0 to n to a string '''
...     return str(base + timedelta(days=i))
>>> df.to_hdf('*.hdf', '/data', name_function=name_function) # doctest: +SKIP
-
to_html
(max_rows=5)¶ Render a DataFrame as an HTML table.
This docstring was copied from pandas.core.frame.DataFrame.to_html.
Some inconsistencies with the Dask version may exist.
Parameters: buf : StringIO-like, optional (Not supported in Dask)
Buffer to write to.
columns : sequence, optional, default None (Not supported in Dask)
The subset of columns to write. Writes all columns by default.
col_space : str or int, optional (Not supported in Dask)
The minimum width of each column in CSS length units. An int is assumed to be px units.
New in version 0.25.0: Ability to use str.
header : bool, optional (Not supported in Dask)
Whether to print column labels, default True.
index : bool, optional, default True (Not supported in Dask)
Whether to print index (row) labels.
na_rep : str, optional, default ‘NaN’ (Not supported in Dask)
String representation of NAN to use.
formatters : list or dict of one-param. functions, optional (Not supported in Dask)
Formatter functions to apply to columns’ elements by position or name. The result of each function must be a unicode string. List must be of length equal to the number of columns.
float_format : one-parameter function, optional, default None (Not supported in Dask)
Formatter function to apply to columns’ elements if they are floats. The result of this function must be a unicode string.
sparsify : bool, optional, default True (Not supported in Dask)
Set to False for a DataFrame with a hierarchical index to print every multiindex key at each row.
index_names : bool, optional, default True (Not supported in Dask)
Prints the names of the indexes.
justify : str, default None (Not supported in Dask)
How to justify the column labels. If None uses the option from the print configuration (controlled by set_option), ‘right’ out of the box. Valid values are
- left
- right
- center
- justify
- justify-all
- start
- end
- inherit
- match-parent
- initial
- unset.
max_rows : int, optional
Maximum number of rows to display in the console.
min_rows : int, optional
The number of rows to display in the console in a truncated repr (when number of rows is above max_rows).
max_cols : int, optional (Not supported in Dask)
Maximum number of columns to display in the console.
show_dimensions : bool, default False (Not supported in Dask)
Display DataFrame dimensions (number of rows by number of columns).
decimal : str, default ‘.’ (Not supported in Dask)
Character recognized as decimal separator, e.g. ‘,’ in Europe.
New in version 0.18.0.
bold_rows : bool, default True (Not supported in Dask)
Make the row labels bold in the output.
classes : str or list or tuple, default None (Not supported in Dask)
CSS class(es) to apply to the resulting html table.
escape : bool, default True (Not supported in Dask)
Convert the characters <, >, and & to HTML-safe sequences.
notebook : {True, False}, default False (Not supported in Dask)
Whether the generated HTML is for IPython Notebook.
border : int (Not supported in Dask)
A border=border attribute is included in the opening <table> tag. Default pd.options.display.html.border.
New in version 0.19.0.
table_id : str, optional (Not supported in Dask)
A css id is included in the opening <table> tag if specified.
New in version 0.23.0.
render_links : bool, default False (Not supported in Dask)
Convert URLs to HTML links.
New in version 0.24.0.
Returns: str (or unicode, depending on data and options)
String representation of the dataframe.
See also
to_string
- Convert DataFrame to a string.
-
to_json
(filename, *args, **kwargs)¶ See dd.to_json docstring for more information
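A minimal usage sketch (assuming the * placeholder in the filename behaves as for the other writers, producing one JSON file per partition; the path is a placeholder):
>>> df.to_json('output-*.json') # doctest: +SKIP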
-
to_parquet
(path, *args, **kwargs)¶ Store Dask.dataframe to Parquet files
Parameters: df : dask.dataframe.DataFrame
path : string or pathlib.Path
Destination directory for data. Prepend with protocol like s3:// or hdfs:// for remote data.
engine : {‘auto’, ‘fastparquet’, ‘pyarrow’}, default ‘auto’
Parquet library to use. If only one library is installed, it will use that one; if both, it will use ‘fastparquet’.
compression : string or dict, optional
Either a string like "snappy" or a dictionary mapping column names to compressors like {"name": "gzip", "values": "snappy"}. The default is "default", which uses the default compression for whichever engine is selected.
write_index : boolean, optional
Whether or not to write the index. Defaults to True.
append : bool, optional
If False (default), construct data-set from scratch. If True, add new row-group(s) to an existing data-set. In the latter case, the data-set must exist, and the schema must match the input data.
ignore_divisions : bool, optional
If False (default) raises error when previous divisions overlap with the new appended divisions. Ignored if append=False.
partition_on : list, optional
Construct directory-based partitioning by splitting on these fields’ values. Each dask partition will result in one or more datafiles, there will be no global groupby.
storage_options : dict, optional
Key/value pairs to be passed on to the file-system backend, if any.
write_metadata_file : bool, optional
Whether to write the special “_metadata” file.
compute : bool, optional
If True (default) then the result is computed immediately. If False then a dask.delayed object is returned for future computation.
**kwargs :
Extra options to be passed on to the specific backend.
See also
read_parquet
- Read parquet data to dask.dataframe
Notes
Each partition will be written to a separate file.
Examples
>>> df = dd.read_csv(...) # doctest: +SKIP
>>> dd.to_parquet(df, '/path/to/output/', ...) # doctest: +SKIP
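A slightly fuller sketch (the column name 'year' and the compression choice are illustrative assumptions, not part of the original docstring):
>>> dd.to_parquet(df, '/path/to/output/', compression='snappy', partition_on=['year']) # doctest: +SKIP
>>> df.to_parquet('/path/to/output/', write_index=False)  # method form, equivalent to the function form # doctest: +SKIP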
-
to_records
(index=False, lengths=None)¶ Create Dask Array from a Dask Dataframe
Warning: This creates a dask.array without precise shape information. Operations that depend on shape information, like slicing or reshaping, will not work.
See also
dask.dataframe._Frame.values, dask.dataframe.from_dask_array
Examples
>>> df.to_records() # doctest: +SKIP
dask.array<to_records, shape=(nan,), dtype=(numpy.record, [('ind', '<f8'), ('x', 'O'), ('y', '<i8')]), chunksize=(nan,), chunktype=numpy.ndarray>
-
to_string
(max_rows=5)¶ Render a DataFrame to a console-friendly tabular output.
This docstring was copied from pandas.core.frame.DataFrame.to_string.
Some inconsistencies with the Dask version may exist.
Parameters: buf : StringIO-like, optional (Not supported in Dask)
Buffer to write to.
columns : sequence, optional, default None (Not supported in Dask)
The subset of columns to write. Writes all columns by default.
col_space : int, optional (Not supported in Dask)
The minimum width of each column.
header : bool, optional (Not supported in Dask)
Write out the column names. If a list of strings is given, it is assumed to be aliases for the column names.
index : bool, optional, default True (Not supported in Dask)
Whether to print index (row) labels.
na_rep : str, optional, default ‘NaN’ (Not supported in Dask)
String representation of NAN to use.
formatters : list or dict of one-param. functions, optional (Not supported in Dask)
Formatter functions to apply to columns’ elements by position or name. The result of each function must be a unicode string. List must be of length equal to the number of columns.
float_format : one-parameter function, optional, default None (Not supported in Dask)
Formatter function to apply to columns’ elements if they are floats. The result of this function must be a unicode string.
sparsify : bool, optional, default True (Not supported in Dask)
Set to False for a DataFrame with a hierarchical index to print every multiindex key at each row.
index_names : bool, optional, default True (Not supported in Dask)
Prints the names of the indexes.
justify : str, default None (Not supported in Dask)
How to justify the column labels. If None uses the option from the print configuration (controlled by set_option), ‘right’ out of the box. Valid values are
- left
- right
- center
- justify
- justify-all
- start
- end
- inherit
- match-parent
- initial
- unset.
max_rows : int, optional
Maximum number of rows to display in the console.
min_rows : int, optional (Not supported in Dask)
The number of rows to display in the console in a truncated repr (when number of rows is above max_rows).
max_cols : int, optional (Not supported in Dask)
Maximum number of columns to display in the console.
show_dimensions : bool, default False (Not supported in Dask)
Display DataFrame dimensions (number of rows by number of columns).
decimal : str, default ‘.’ (Not supported in Dask)
Character recognized as decimal separator, e.g. ‘,’ in Europe.
New in version 0.18.0.
line_width : int, optional (Not supported in Dask)
Width to wrap a line in characters.
Returns: str (or unicode, depending on data and options)
String representation of the dataframe.
See also
to_html
- Convert DataFrame to HTML.
Examples
>>> d = {'col1': [1, 2, 3], 'col2': [4, 5, 6]} # doctest: +SKIP
>>> df = pd.DataFrame(d) # doctest: +SKIP
>>> print(df.to_string()) # doctest: +SKIP
   col1  col2
0     1     4
1     2     5
2     3     6
-
to_timestamp
(freq=None, how='start', axis=0)¶ Cast to DatetimeIndex of timestamps, at beginning of period.
This docstring was copied from pandas.core.frame.DataFrame.to_timestamp.
Some inconsistencies with the Dask version may exist.
Parameters: freq : str, default frequency of PeriodIndex
Desired frequency.
how : {‘s’, ‘e’, ‘start’, ‘end’}
Convention for converting period to timestamp; start of period vs. end.
axis : {0 or ‘index’, 1 or ‘columns’}, default 0
The axis to convert (the index by default).
copy : bool, default True (Not supported in Dask)
If False then underlying input data is not copied.
Returns: DataFrame with DatetimeIndex
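A minimal sketch (assuming a period-indexed frame; whether dd.from_pandas accepts a PeriodIndex in your version should be checked, so treat this as illustrative only):
>>> import pandas as pd # doctest: +SKIP
>>> import dask.dataframe as dd # doctest: +SKIP
>>> pdf = pd.DataFrame({'x': range(4)}, index=pd.period_range('2000-01', periods=4, freq='M')) # doctest: +SKIP
>>> ddf = dd.from_pandas(pdf, npartitions=2) # doctest: +SKIP
>>> ddf.to_timestamp(how='start')  # index becomes a DatetimeIndex at each period's start # doctest: +SKIP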
-
truediv
(other, axis='columns', level=None, fill_value=None)¶ Get Floating division of dataframe and other, element-wise (binary operator truediv).
This docstring was copied from pandas.core.frame.DataFrame.truediv.
Some inconsistencies with the Dask version may exist.
Equivalent to dataframe / other, but with support to substitute a fill_value for missing data in one of the inputs. With reverse version, rtruediv.
Among flexible wrappers (add, sub, mul, div, mod, pow) to arithmetic operators: +, -, *, /, //, %, **.
Parameters: other : scalar, sequence, Series, or DataFrame
Any single or multiple element data structure, or list-like object.
axis : {0 or ‘index’, 1 or ‘columns’}
Whether to compare by the index (0 or ‘index’) or columns (1 or ‘columns’). For Series input, axis to match Series index on.
level : int or label
Broadcast across a level, matching Index values on the passed MultiIndex level.
fill_value : float or None, default None
Fill existing missing (NaN) values, and any new element needed for successful DataFrame alignment, with this value before computation. If data in both corresponding DataFrame locations is missing the result will be missing.
Returns: DataFrame
Result of the arithmetic operation.
See also
DataFrame.add
- Add DataFrames.
DataFrame.sub
- Subtract DataFrames.
DataFrame.mul
- Multiply DataFrames.
DataFrame.div
- Divide DataFrames (float division).
DataFrame.truediv
- Divide DataFrames (float division).
DataFrame.floordiv
- Divide DataFrames (integer division).
DataFrame.mod
- Calculate modulo (remainder after division).
DataFrame.pow
- Calculate exponential power.
Notes
Mismatched indices will be unioned together.
Examples
>>> df = pd.DataFrame({'angles': [0, 3, 4], # doctest: +SKIP
...                    'degrees': [360, 180, 360]},
...                   index=['circle', 'triangle', 'rectangle'])
>>> df # doctest: +SKIP
           angles  degrees
circle          0      360
triangle        3      180
rectangle       4      360
Add a scalar with the operator version, which returns the same results.
>>> df + 1 # doctest: +SKIP angles degrees circle 1 361 triangle 4 181 rectangle 5 361
>>> df.add(1) # doctest: +SKIP angles degrees circle 1 361 triangle 4 181 rectangle 5 361
Divide by constant with reverse version.
>>> df.div(10) # doctest: +SKIP angles degrees circle 0.0 36.0 triangle 0.3 18.0 rectangle 0.4 36.0
>>> df.rdiv(10) # doctest: +SKIP angles degrees circle inf 0.027778 triangle 3.333333 0.055556 rectangle 2.500000 0.027778
Subtract a list and Series by axis with operator version.
>>> df - [1, 2] # doctest: +SKIP angles degrees circle -1 358 triangle 2 178 rectangle 3 358
>>> df.sub([1, 2], axis='columns') # doctest: +SKIP angles degrees circle -1 358 triangle 2 178 rectangle 3 358
>>> df.sub(pd.Series([1, 1, 1], index=['circle', 'triangle', 'rectangle']), # doctest: +SKIP ... axis='index') angles degrees circle -1 359 triangle 2 179 rectangle 3 359
Multiply a DataFrame of different shape with operator version.
>>> other = pd.DataFrame({'angles': [0, 3, 4]}, # doctest: +SKIP ... index=['circle', 'triangle', 'rectangle']) >>> other # doctest: +SKIP angles circle 0 triangle 3 rectangle 4
>>> df * other # doctest: +SKIP angles degrees circle 0 NaN triangle 9 NaN rectangle 16 NaN
>>> df.mul(other, fill_value=0) # doctest: +SKIP angles degrees circle 0 0.0 triangle 9 0.0 rectangle 16 0.0
Divide by a MultiIndex by level.
>>> df_multindex = pd.DataFrame({'angles': [0, 3, 4, 4, 5, 6], # doctest: +SKIP ... 'degrees': [360, 180, 360, 360, 540, 720]}, ... index=[['A', 'A', 'A', 'B', 'B', 'B'], ... ['circle', 'triangle', 'rectangle', ... 'square', 'pentagon', 'hexagon']]) >>> df_multindex # doctest: +SKIP angles degrees A circle 0 360 triangle 3 180 rectangle 4 360 B square 4 360 pentagon 5 540 hexagon 6 720
>>> df.div(df_multindex, level=1, fill_value=0) # doctest: +SKIP angles degrees A circle NaN 1.0 triangle 1.0 1.0 rectangle 1.0 1.0 B square 0.0 0.0 pentagon 0.0 0.0 hexagon 0.0 0.0
-
values
¶ Return a dask.array of the values of this dataframe
Warning: This creates a dask.array without precise shape information. Operations that depend on shape information, like slicing or reshaping, will not work.
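For illustration (a sketch, not part of the original docstring): when downstream operations need known chunk sizes, to_dask_array(lengths=True) is the usual alternative to this property.
>>> arr = df.values  # dask.array with nan-sized chunks along the rows # doctest: +SKIP
>>> arr = df.to_dask_array(lengths=True)  # computes partition lengths so the chunks are known # doctest: +SKIP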
-
var
(axis=None, skipna=True, ddof=1, split_every=False, dtype=None, out=None)¶ Return unbiased variance over requested axis.
This docstring was copied from pandas.core.frame.DataFrame.var.
Some inconsistencies with the Dask version may exist.
Normalized by N-1 by default. This can be changed using the ddof argument
Parameters: axis : {index (0), columns (1)}
skipna : bool, default True
Exclude NA/null values. If an entire row/column is NA, the result will be NA
level : int or level name, default None (Not supported in Dask)
If the axis is a MultiIndex (hierarchical), count along a particular level, collapsing into a Series
ddof : int, default 1
Delta Degrees of Freedom. The divisor used in calculations is N - ddof, where N represents the number of elements.
numeric_only : bool, default None (Not supported in Dask)
Include only float, int, boolean columns. If None, will attempt to use everything, then use only numeric data. Not implemented for Series.
Returns: Series or DataFrame (if level specified)
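A minimal sketch (assuming a small numeric frame built with dd.from_pandas; compute() is needed to get the concrete result):
>>> import pandas as pd # doctest: +SKIP
>>> import dask.dataframe as dd # doctest: +SKIP
>>> ddf = dd.from_pandas(pd.DataFrame({'x': [1.0, 2.0, 3.0, 4.0]}), npartitions=2) # doctest: +SKIP
>>> ddf.var().compute()  # sample variance, ddof=1 by default # doctest: +SKIP
>>> ddf.var(ddof=0).compute()  # population variance # doctest: +SKIP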
-
visualize
(filename='mydask', format=None, optimize_graph=False, **kwargs)¶ Render the computation of this object’s task graph using graphviz.
Requires
graphviz
to be installed.Parameters: filename : str or None, optional
The name (without an extension) of the file to write to disk. If filename is None, no file will be written, and we communicate with dot using only pipes.
format : {‘png’, ‘pdf’, ‘dot’, ‘svg’, ‘jpeg’, ‘jpg’}, optional
Format in which to write output file. Default is ‘png’.
optimize_graph : bool, optional
If True, the graph is optimized before rendering. Otherwise, the graph is displayed as is. Default is False.
color: {None, ‘order’}, optional
Options to color nodes. Provide cmap= keyword for additional colormap.
**kwargs
Additional keyword arguments to forward to to_graphviz.
Returns: result : IPython.display.Image, IPython.display.SVG, or None
See dask.dot.dot_graph for more information.
See also
dask.base.visualize, dask.dot.dot_graph
Notes
For more information on optimization see here:
https://docs.dask.org/en/latest/optimize.html
Examples
>>> x.visualize(filename='dask.pdf') # doctest: +SKIP
>>> x.visualize(filename='dask.pdf', color='order') # doctest: +SKIP
-
where
(cond, other=nan)¶ Replace values where the condition is False.
This docstring was copied from pandas.core.frame.DataFrame.where.
Some inconsistencies with the Dask version may exist.
Parameters: cond : boolean Series/DataFrame, array-like, or callable
Where cond is True, keep the original value. Where False, replace with corresponding value from other. If cond is callable, it is computed on the Series/DataFrame and should return boolean Series/DataFrame or array. The callable must not change input Series/DataFrame (though pandas doesn’t check it).
New in version 0.18.1: A callable can be used as cond.
other : scalar, Series/DataFrame, or callable
Entries where cond is False are replaced with corresponding value from other. If other is callable, it is computed on the Series/DataFrame and should return scalar or Series/DataFrame. The callable must not change input Series/DataFrame (though pandas doesn’t check it).
New in version 0.18.1: A callable can be used as other.
inplace : bool, default False (Not supported in Dask)
Whether to perform the operation in place on the data.
axis : int, default None (Not supported in Dask)
Alignment axis if needed.
level : int, default None (Not supported in Dask)
Alignment level if needed.
errors : str, {‘raise’, ‘ignore’}, default ‘raise’ (Not supported in Dask)
Note that currently this parameter won’t affect the results and will always coerce to a suitable dtype.
- ‘raise’ : allow exceptions to be raised.
- ‘ignore’ : suppress exceptions. On error return original object.
try_cast : bool, default False (Not supported in Dask)
Try to cast the result back to the input type (if possible).
Returns: Same type as caller
See also
DataFrame.mask()
- Return an object of same shape as self.
Notes
The where method is an application of the if-then idiom. For each element in the calling DataFrame, if cond is True the element is used; otherwise the corresponding element from the DataFrame other is used.
The signature for DataFrame.where() differs from numpy.where(). Roughly df1.where(m, df2) is equivalent to np.where(m, df1, df2).
For further details and examples see the where documentation in indexing.
Examples
>>> s = pd.Series(range(5)) # doctest: +SKIP >>> s.where(s > 0) # doctest: +SKIP 0 NaN 1 1.0 2 2.0 3 3.0 4 4.0 dtype: float64
>>> s.mask(s > 0) # doctest: +SKIP 0 0.0 1 NaN 2 NaN 3 NaN 4 NaN dtype: float64
>>> s.where(s > 1, 10) # doctest: +SKIP 0 10 1 10 2 2 3 3 4 4 dtype: int64
>>> df = pd.DataFrame(np.arange(10).reshape(-1, 2), columns=['A', 'B']) # doctest: +SKIP >>> df # doctest: +SKIP A B 0 0 1 1 2 3 2 4 5 3 6 7 4 8 9 >>> m = df % 3 == 0 # doctest: +SKIP >>> df.where(m, -df) # doctest: +SKIP A B 0 0 -1 1 -2 3 2 -4 -5 3 6 -7 4 -8 9 >>> df.where(m, -df) == np.where(m, df, -df) # doctest: +SKIP A B 0 True True 1 True True 2 True True 3 True True 4 True True >>> df.where(m, -df) == df.mask(~m, -df) # doctest: +SKIP A B 0 True True 1 True True 2 True True 3 True True 4 True True
-
Series Methods¶
-
class dask.dataframe.Series(dsk, name, meta, divisions)¶
Parallel Pandas Series
Do not use this class directly. Instead use functions like dd.read_csv, dd.read_parquet, or dd.from_pandas.
Parameters: dsk: dict
The dask graph to compute this Series
_name: str
The key prefix that specifies which keys in the dask comprise this particular Series
meta: pandas.Series
An empty pandas.Series with names, dtypes, and index matching the expected output.
divisions: tuple of index values
Values along which we partition our blocks on the index
-
abs
()¶ Return a Series/DataFrame with absolute numeric value of each element.
This docstring was copied from pandas.core.frame.DataFrame.abs.
Some inconsistencies with the Dask version may exist.
This function only applies to elements that are all numeric.
Returns: abs
Series/DataFrame containing the absolute value of each element.
See also
numpy.absolute
- Calculate the absolute value element-wise.
Notes
For complex inputs, 1.2 + 1j, the absolute value is \(\sqrt{a^2 + b^2}\).
Examples
Absolute numeric values in a Series.
>>> s = pd.Series([-1.10, 2, -3.33, 4]) # doctest: +SKIP >>> s.abs() # doctest: +SKIP 0 1.10 1 2.00 2 3.33 3 4.00 dtype: float64
Absolute numeric values in a Series with complex numbers.
>>> s = pd.Series([1.2 + 1j]) # doctest: +SKIP >>> s.abs() # doctest: +SKIP 0 1.56205 dtype: float64
Absolute numeric values in a Series with a Timedelta element.
>>> s = pd.Series([pd.Timedelta('1 days')]) # doctest: +SKIP >>> s.abs() # doctest: +SKIP 0 1 days dtype: timedelta64[ns]
Select rows with data closest to certain value using argsort (from StackOverflow).
>>> df = pd.DataFrame({ # doctest: +SKIP ... 'a': [4, 5, 6, 7], ... 'b': [10, 20, 30, 40], ... 'c': [100, 50, -30, -50] ... }) >>> df # doctest: +SKIP a b c 0 4 10 100 1 5 20 50 2 6 30 -30 3 7 40 -50 >>> df.loc[(df.c - 43).abs().argsort()] # doctest: +SKIP a b c 1 5 20 50 0 4 10 100 2 6 30 -30 3 7 40 -50
-
add
(other, level=None, fill_value=None, axis=0)¶ Return Addition of series and other, element-wise (binary operator add).
This docstring was copied from pandas.core.series.Series.add.
Some inconsistencies with the Dask version may exist.
Equivalent to series + other, but with support to substitute a fill_value for missing data in one of the inputs.
Parameters: other : Series or scalar value
fill_value : None or float value, default None (NaN)
Fill existing missing (NaN) values, and any new element needed for successful Series alignment, with this value before computation. If data in both corresponding Series locations is missing the result will be missing.
level : int or name
Broadcast across a level, matching Index values on the passed MultiIndex level.
Returns: Series
The result of the operation.
Examples
>>> a = pd.Series([1, 1, 1, np.nan], index=['a', 'b', 'c', 'd']) # doctest: +SKIP >>> a # doctest: +SKIP a 1.0 b 1.0 c 1.0 d NaN dtype: float64 >>> b = pd.Series([1, np.nan, 1, np.nan], index=['a', 'b', 'd', 'e']) # doctest: +SKIP >>> b # doctest: +SKIP a 1.0 b NaN d 1.0 e NaN dtype: float64 >>> a.add(b, fill_value=0) # doctest: +SKIP a 2.0 b 1.0 c 1.0 d 1.0 e NaN dtype: float64
-
align
(other, join='outer', axis=None, fill_value=None)¶ Align two objects on their axes with the specified join method for each axis Index.
This docstring was copied from pandas.core.series.Series.align.
Some inconsistencies with the Dask version may exist.
Parameters: other : DataFrame or Series
join : {‘outer’, ‘inner’, ‘left’, ‘right’}, default ‘outer’
axis : allowed axis of the other object, default None
Align on index (0), columns (1), or both (None)
level : int or level name, default None (Not supported in Dask)
Broadcast across a level, matching Index values on the passed MultiIndex level
copy : boolean, default True (Not supported in Dask)
Always returns new objects. If copy=False and no reindexing is required then original objects are returned.
fill_value : scalar, default np.NaN
Value to use for missing values. Defaults to NaN, but can be any “compatible” value
method : {‘backfill’, ‘bfill’, ‘pad’, ‘ffill’, None}, default None (Not supported in Dask)
Method to use for filling holes in reindexed Series pad / ffill: propagate last valid observation forward to next valid backfill / bfill: use NEXT valid observation to fill gap
limit : int, default None (Not supported in Dask)
If method is specified, this is the maximum number of consecutive NaN values to forward/backward fill. In other words, if there is a gap with more than this number of consecutive NaNs, it will only be partially filled. If method is not specified, this is the maximum number of entries along the entire axis where NaNs will be filled. Must be greater than 0 if not None.
fill_axis : {0 or ‘index’}, default 0 (Not supported in Dask)
Filling axis, method and limit
broadcast_axis : {0 or ‘index’}, default None (Not supported in Dask)
Broadcast values along this axis, if aligning two objects of different dimensions
Returns: (left, right) : (Series, type of other)
Aligned objects.
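A minimal sketch (assuming two dask Series with partially overlapping indexes, built with dd.from_pandas; names are arbitrary):
>>> import pandas as pd # doctest: +SKIP
>>> import dask.dataframe as dd # doctest: +SKIP
>>> a = dd.from_pandas(pd.Series([1, 2, 3], index=[1, 2, 3]), npartitions=1) # doctest: +SKIP
>>> b = dd.from_pandas(pd.Series([10, 20], index=[2, 3]), npartitions=1) # doctest: +SKIP
>>> left, right = a.align(b, join='inner')  # both results share the index [2, 3] # doctest: +SKIP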
-
all
(axis=None, skipna=True, split_every=False, out=None)¶ Return whether all elements are True, potentially over an axis.
This docstring was copied from pandas.core.frame.DataFrame.all.
Some inconsistencies with the Dask version may exist.
Returns True unless there is at least one element within a series or along a Dataframe axis that is False or equivalent (e.g. zero or empty).
Parameters: axis : {0 or ‘index’, 1 or ‘columns’, None}, default 0
Indicate which axis or axes should be reduced.
- 0 / ‘index’ : reduce the index, return a Series whose index is the original column labels.
- 1 / ‘columns’ : reduce the columns, return a Series whose index is the original index.
- None : reduce all axes, return a scalar.
bool_only : bool, default None (Not supported in Dask)
Include only boolean columns. If None, will attempt to use everything, then use only boolean data. Not implemented for Series.
skipna : bool, default True
Exclude NA/null values. If the entire row/column is NA and skipna is True, then the result will be True, as for an empty row/column. If skipna is False, then NA are treated as True, because these are not equal to zero.
level : int or level name, default None (Not supported in Dask)
If the axis is a MultiIndex (hierarchical), count along a particular level, collapsing into a Series.
**kwargs : any, default None
Additional keywords have no effect but might be accepted for compatibility with NumPy.
Returns: Series or DataFrame
If level is specified, then, DataFrame is returned; otherwise, Series is returned.
See also
Series.all
- Return True if all elements are True.
DataFrame.any
- Return True if one (or more) elements are True.
Examples
Series
>>> pd.Series([True, True]).all() # doctest: +SKIP True >>> pd.Series([True, False]).all() # doctest: +SKIP False >>> pd.Series([]).all() # doctest: +SKIP True >>> pd.Series([np.nan]).all() # doctest: +SKIP True >>> pd.Series([np.nan]).all(skipna=False) # doctest: +SKIP True
DataFrames
Create a dataframe from a dictionary.
>>> df = pd.DataFrame({'col1': [True, True], 'col2': [True, False]}) # doctest: +SKIP >>> df # doctest: +SKIP col1 col2 0 True True 1 True False
Default behaviour checks if column-wise values all return True.
>>> df.all() # doctest: +SKIP col1 True col2 False dtype: bool
Specify axis='columns' to check if row-wise values all return True.
>>> df.all(axis='columns') # doctest: +SKIP
0 True
1 False
dtype: bool
Or axis=None for whether every value is True.
>>> df.all(axis=None) # doctest: +SKIP
False
-
any
(axis=None, skipna=True, split_every=False, out=None)¶ Return whether any element is True, potentially over an axis.
This docstring was copied from pandas.core.frame.DataFrame.any.
Some inconsistencies with the Dask version may exist.
Returns False unless there is at least one element within a series or along a Dataframe axis that is True or equivalent (e.g. non-zero or non-empty).
Parameters: axis : {0 or ‘index’, 1 or ‘columns’, None}, default 0
Indicate which axis or axes should be reduced.
- 0 / ‘index’ : reduce the index, return a Series whose index is the original column labels.
- 1 / ‘columns’ : reduce the columns, return a Series whose index is the original index.
- None : reduce all axes, return a scalar.
bool_only : bool, default None (Not supported in Dask)
Include only boolean columns. If None, will attempt to use everything, then use only boolean data. Not implemented for Series.
skipna : bool, default True
Exclude NA/null values. If the entire row/column is NA and skipna is True, then the result will be False, as for an empty row/column. If skipna is False, then NA are treated as True, because these are not equal to zero.
level : int or level name, default None (Not supported in Dask)
If the axis is a MultiIndex (hierarchical), count along a particular level, collapsing into a Series.
**kwargs : any, default None
Additional keywords have no effect but might be accepted for compatibility with NumPy.
Returns: Series or DataFrame
If level is specified, then, DataFrame is returned; otherwise, Series is returned.
See also
numpy.any
- Numpy version of this method.
Series.any
- Return whether any element is True.
Series.all
- Return whether all elements are True.
DataFrame.any
- Return whether any element is True over requested axis.
DataFrame.all
- Return whether all elements are True over requested axis.
Examples
Series
For Series input, the output is a scalar indicating whether any element is True.
>>> pd.Series([False, False]).any() # doctest: +SKIP False >>> pd.Series([True, False]).any() # doctest: +SKIP True >>> pd.Series([]).any() # doctest: +SKIP False >>> pd.Series([np.nan]).any() # doctest: +SKIP False >>> pd.Series([np.nan]).any(skipna=False) # doctest: +SKIP True
DataFrame
Whether each column contains at least one True element (the default).
>>> df = pd.DataFrame({"A": [1, 2], "B": [0, 2], "C": [0, 0]}) # doctest: +SKIP >>> df # doctest: +SKIP A B C 0 1 0 0 1 2 2 0
>>> df.any() # doctest: +SKIP A True B True C False dtype: bool
Aggregating over the columns.
>>> df = pd.DataFrame({"A": [True, False], "B": [1, 2]}) # doctest: +SKIP >>> df # doctest: +SKIP A B 0 True 1 1 False 2
>>> df.any(axis='columns') # doctest: +SKIP 0 True 1 True dtype: bool
>>> df = pd.DataFrame({"A": [True, False], "B": [1, 0]}) # doctest: +SKIP >>> df # doctest: +SKIP A B 0 True 1 1 False 0
>>> df.any(axis='columns') # doctest: +SKIP 0 True 1 False dtype: bool
Aggregating over the entire DataFrame with axis=None.
>>> df.any(axis=None) # doctest: +SKIP
True
any for an empty DataFrame is an empty Series.
>>> pd.DataFrame([]).any() # doctest: +SKIP Series([], dtype: bool)
-
append
(other, interleave_partitions=False)¶ Concatenate two or more Series.
This docstring was copied from pandas.core.series.Series.append.
Some inconsistencies with the Dask version may exist.
Parameters: to_append : Series or list/tuple of Series (Not supported in Dask)
Series to append with self.
ignore_index : bool, default False (Not supported in Dask)
If True, do not use the index labels.
New in version 0.19.0.
verify_integrity : bool, default False (Not supported in Dask)
If True, raise Exception on creating index with duplicates.
Returns: Series
Concatenated Series.
See also
concat
- General function to concatenate DataFrame or Series objects.
Notes
Iteratively appending to a Series can be more computationally intensive than a single concatenate. A better solution is to append values to a list and then concatenate the list with the original Series all at once.
Examples
>>> s1 = pd.Series([1, 2, 3]) # doctest: +SKIP >>> s2 = pd.Series([4, 5, 6]) # doctest: +SKIP >>> s3 = pd.Series([4, 5, 6], index=[3, 4, 5]) # doctest: +SKIP >>> s1.append(s2) # doctest: +SKIP 0 1 1 2 2 3 0 4 1 5 2 6 dtype: int64
>>> s1.append(s3) # doctest: +SKIP 0 1 1 2 2 3 3 4 4 5 5 6 dtype: int64
With ignore_index set to True:
>>> s1.append(s2, ignore_index=True) # doctest: +SKIP 0 1 1 2 2 3 3 4 4 5 5 6 dtype: int64
With verify_integrity set to True:
>>> s1.append(s2, verify_integrity=True) # doctest: +SKIP Traceback (most recent call last): ... ValueError: Indexes have overlapping values: [0, 1, 2]
-
apply
(func, convert_dtype=True, meta='__no_default__', args=(), **kwds)¶ Parallel version of pandas.Series.apply
Parameters: func : function
Function to apply
convert_dtype : boolean, default True
Try to find better dtype for elementwise function results. If False, leave as dtype=object.
meta : pd.DataFrame, pd.Series, dict, iterable, tuple, optional
An empty pd.DataFrame or pd.Series that matches the dtypes and column names of the output. This metadata is necessary for many algorithms in dask dataframe to work. For ease of use, some alternative inputs are also available. Instead of a DataFrame, a dict of {name: dtype} or iterable of (name, dtype) can be provided (note that the order of the names should match the order of the columns). Instead of a series, a tuple of (name, dtype) can be used. If not provided, dask will try to infer the metadata. This may lead to unexpected results, so providing meta is recommended. For more information, see dask.dataframe.utils.make_meta.
args : tuple
Positional arguments to pass to function in addition to the value.
Additional keyword arguments will be passed as keywords to the function.
Returns: applied : Series or DataFrame if func returns a Series.
See also
dask.Series.map_partitions
Examples
>>> import pandas as pd
>>> import dask.dataframe as dd
>>> s = pd.Series(range(5), name='x')
>>> ds = dd.from_pandas(s, npartitions=2)
Apply a function elementwise across the Series, passing in extra arguments in args and kwargs:
>>> def myadd(x, a, b=1):
...     return x + a + b
>>> res = ds.apply(myadd, args=(2,), b=1.5) # doctest: +SKIP
By default, dask tries to infer the output metadata by running your provided function on some fake data. This works well in many cases, but can sometimes be expensive, or even fail. To avoid this, you can manually specify the output metadata with the meta keyword. This can be specified in many forms, for more information see dask.dataframe.utils.make_meta.
Here we specify the output is a Series with name 'x', and dtype float64:
>>> res = ds.apply(myadd, args=(2,), b=1.5, meta=('x', 'f8'))
In the case where the metadata doesn’t change, you can also pass in the object itself directly:
>>> res = ds.apply(lambda x: x + 1, meta=ds)
-
astype
(dtype)¶ Cast a pandas object to a specified dtype
dtype
.This docstring was copied from pandas.core.frame.DataFrame.astype.
Some inconsistencies with the Dask version may exist.
Parameters: dtype : data type, or dict of column name -> data type
Use a numpy.dtype or Python type to cast entire pandas object to the same type. Alternatively, use {col: dtype, …}, where col is a column label and dtype is a numpy.dtype or Python type to cast one or more of the DataFrame’s columns to column-specific types.
copy : bool, default True (Not supported in Dask)
Return a copy when copy=True (be very careful setting copy=False as changes to values then may propagate to other pandas objects).
errors : {‘raise’, ‘ignore’}, default ‘raise’ (Not supported in Dask)
Control raising of exceptions on invalid data for provided dtype.
- raise : allow exceptions to be raised
- ignore : suppress exceptions. On error return original object
New in version 0.20.0.
kwargs : keyword arguments to pass on to the constructor
Returns: casted : same type as caller
See also
to_datetime
- Convert argument to datetime.
to_timedelta
- Convert argument to timedelta.
to_numeric
- Convert argument to a numeric type.
numpy.ndarray.astype
- Cast a numpy array to a specified type.
Examples
Create a DataFrame:
>>> d = {'col1': [1, 2], 'col2': [3, 4]} # doctest: +SKIP >>> df = pd.DataFrame(data=d) # doctest: +SKIP >>> df.dtypes # doctest: +SKIP col1 int64 col2 int64 dtype: object
Cast all columns to int32:
>>> df.astype('int32').dtypes # doctest: +SKIP col1 int32 col2 int32 dtype: object
Cast col1 to int32 using a dictionary:
>>> df.astype({'col1': 'int32'}).dtypes # doctest: +SKIP col1 int32 col2 int64 dtype: object
Create a series:
>>> ser = pd.Series([1, 2], dtype='int32') # doctest: +SKIP >>> ser # doctest: +SKIP 0 1 1 2 dtype: int32 >>> ser.astype('int64') # doctest: +SKIP 0 1 1 2 dtype: int64
Convert to categorical type:
>>> ser.astype('category') # doctest: +SKIP 0 1 1 2 dtype: category Categories (2, int64): [1, 2]
Convert to ordered categorical type with custom ordering:
>>> cat_dtype = pd.api.types.CategoricalDtype( # doctest: +SKIP ... categories=[2, 1], ordered=True) >>> ser.astype(cat_dtype) # doctest: +SKIP 0 1 1 2 dtype: category Categories (2, int64): [2 < 1]
Note that using copy=False and changing data on a new pandas object may propagate changes:
>>> s1 = pd.Series([1, 2]) # doctest: +SKIP
>>> s2 = s1.astype('int64', copy=False) # doctest: +SKIP
>>> s2[0] = 10 # doctest: +SKIP
>>> s1  # note that s1[0] has changed too # doctest: +SKIP
0 10
1 2
dtype: int64
-
autocorr
(lag=1, split_every=False)¶ Compute the lag-N autocorrelation.
This docstring was copied from pandas.core.series.Series.autocorr.
Some inconsistencies with the Dask version may exist.
This method computes the Pearson correlation between the Series and its shifted self.
Parameters: lag : int, default 1
Number of lags to apply before performing autocorrelation.
Returns: float
The Pearson correlation between self and self.shift(lag).
See also
Series.corr
- Compute the correlation between two Series.
Series.shift
- Shift index by desired number of periods.
DataFrame.corr
- Compute pairwise correlation of columns.
DataFrame.corrwith
- Compute pairwise correlation between rows or columns of two DataFrame objects.
Notes
If the Pearson correlation is not well defined return ‘NaN’.
Examples
>>> s = pd.Series([0.25, 0.5, 0.2, -0.05]) # doctest: +SKIP >>> s.autocorr() # doctest: +ELLIPSIS, +SKIP 0.10355... >>> s.autocorr(lag=2) # doctest: +ELLIPSIS, +SKIP -0.99999...
If the Pearson correlation is not well defined, then ‘NaN’ is returned.
>>> s = pd.Series([1, 0, 0, 0]) # doctest: +SKIP >>> s.autocorr() # doctest: +SKIP nan
-
between
(left, right, inclusive=True)¶ Return boolean Series equivalent to left <= series <= right.
This docstring was copied from pandas.core.series.Series.between.
Some inconsistencies with the Dask version may exist.
This function returns a boolean vector containing True wherever the corresponding Series element is between the boundary values left and right. NA values are treated as False.
Parameters: left : scalar
Left boundary.
right : scalar
Right boundary.
inclusive : bool, default True
Include boundaries.
Returns: Series
Series representing whether each element is between left and right (inclusive).
Notes
This function is equivalent to
(left <= ser) & (ser <= right)
Examples
>>> s = pd.Series([2, 0, 4, 8, np.nan]) # doctest: +SKIP
Boundary values are included by default:
>>> s.between(1, 4) # doctest: +SKIP 0 True 1 False 2 True 3 False 4 False dtype: bool
With inclusive set to
False
boundary values are excluded:>>> s.between(1, 4, inclusive=False) # doctest: +SKIP 0 True 1 False 2 False 3 False 4 False dtype: bool
left and right can be any scalar value:
>>> s = pd.Series(['Alice', 'Bob', 'Carol', 'Eve']) # doctest: +SKIP >>> s.between('Anna', 'Daniel') # doctest: +SKIP 0 False 1 True 2 True 3 False dtype: bool
-
bfill
(axis=None, limit=None)¶ Synonym for DataFrame.fillna() with method='bfill'.
This docstring was copied from pandas.core.frame.DataFrame.bfill.
Some inconsistencies with the Dask version may exist.
Returns: Series/DataFrame
Object with missing values filled.
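A minimal sketch (assuming a small Series with missing values; bfill here behaves like fillna(method='bfill')):
>>> import pandas as pd # doctest: +SKIP
>>> import dask.dataframe as dd # doctest: +SKIP
>>> s = dd.from_pandas(pd.Series([None, 2.0, None, 4.0]), npartitions=2) # doctest: +SKIP
>>> s.bfill().compute()  # each missing value filled from the next valid observation # doctest: +SKIP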
-
clear_divisions
()¶ Forget division information
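A minimal sketch (illustrative only): after clearing divisions the collection no longer reports known_divisions, so index-aligned operations may later require a set_index or repartition.
>>> ddf2 = ddf.clear_divisions() # doctest: +SKIP
>>> ddf2.known_divisions # doctest: +SKIP
False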
-
clip
(lower=None, upper=None, out=None)¶ Trim values at input threshold(s).
This docstring was copied from pandas.core.series.Series.clip.
Some inconsistencies with the Dask version may exist.
Assigns values outside boundary to boundary values. Thresholds can be singular values or array like, and in the latter case the clipping is performed element-wise in the specified axis.
Parameters: lower : float or array_like, default None
Minimum threshold value. All values below this threshold will be set to it.
upper : float or array_like, default None
Maximum threshold value. All values above this threshold will be set to it.
axis : int or str axis name, optional (Not supported in Dask)
Align object with lower and upper along the given axis.
inplace : bool, default False (Not supported in Dask)
Whether to perform the operation in place on the data.
New in version 0.21.0.
*args, **kwargs
Additional keywords have no effect but might be accepted for compatibility with numpy.
Returns: Series or DataFrame
Same type as calling object with the values outside the clip boundaries replaced.
Examples
>>> data = {'col_0': [9, -3, 0, -1, 5], 'col_1': [-2, -7, 6, 8, -5]} # doctest: +SKIP >>> df = pd.DataFrame(data) # doctest: +SKIP >>> df # doctest: +SKIP col_0 col_1 0 9 -2 1 -3 -7 2 0 6 3 -1 8 4 5 -5
Clips per column using lower and upper thresholds:
>>> df.clip(-4, 6) # doctest: +SKIP col_0 col_1 0 6 -2 1 -3 -4 2 0 6 3 -1 6 4 5 -4
Clips using specific lower and upper thresholds per column element:
>>> t = pd.Series([2, -4, -1, 6, 3]) # doctest: +SKIP >>> t # doctest: +SKIP 0 2 1 -4 2 -1 3 6 4 3 dtype: int64
>>> df.clip(t, t + 4, axis=0) # doctest: +SKIP col_0 col_1 0 6 2 1 -3 -4 2 0 3 3 6 8 4 5 3
-
clip_lower
(threshold)¶ Trim values below a given threshold.
This docstring was copied from pandas.core.series.Series.clip_lower.
Some inconsistencies with the Dask version may exist.
Deprecated since version 0.24.0: Use clip(lower=threshold) instead.
Elements below the threshold will be changed to match the threshold value(s). Threshold can be a single value or an array, in the latter case it performs the truncation element-wise.
Parameters: threshold : numeric or array-like
Minimum value allowed. All values below threshold will be set to this value.
- float : every value is compared to threshold.
- array-like : The shape of threshold should match the object it’s compared to. When self is a Series, threshold should be the same length. When self is a DataFrame, threshold should be 2-D and the same shape as self for axis=None, or 1-D and the same length as the axis being compared.
axis : {0 or ‘index’, 1 or ‘columns’}, default 0 (Not supported in Dask)
Align self with threshold along the given axis.
inplace : bool, default False (Not supported in Dask)
Whether to perform the operation in place on the data.
New in version 0.21.0.
Returns: Series or DataFrame
Original data with values trimmed.
See also
Series.clip
- General purpose method to trim Series values to given threshold(s).
DataFrame.clip
- General purpose method to trim DataFrame values to given threshold(s).
Examples
Series single threshold clipping:
>>> s = pd.Series([5, 6, 7, 8, 9]) # doctest: +SKIP >>> s.clip(lower=8) # doctest: +SKIP 0 8 1 8 2 8 3 8 4 9 dtype: int64
Series clipping element-wise using an array of thresholds. threshold should be the same length as the Series.
>>> elemwise_thresholds = [4, 8, 7, 2, 5] # doctest: +SKIP >>> s.clip(lower=elemwise_thresholds) # doctest: +SKIP 0 5 1 8 2 7 3 8 4 9 dtype: int64
DataFrames can be compared to a scalar.
>>> df = pd.DataFrame({"A": [1, 3, 5], "B": [2, 4, 6]}) # doctest: +SKIP >>> df # doctest: +SKIP A B 0 1 2 1 3 4 2 5 6
>>> df.clip(lower=3) # doctest: +SKIP A B 0 3 3 1 3 4 2 5 6
Or to an array of values. By default, threshold should be the same shape as the DataFrame.
>>> df.clip(lower=np.array([[3, 4], [2, 2], [6, 2]])) # doctest: +SKIP A B 0 3 4 1 3 4 2 6 6
Control how threshold is broadcast with axis. In this case threshold should be the same length as the axis specified by axis.
>>> df.clip(lower=[3, 3, 5], axis='index') # doctest: +SKIP A B 0 3 3 1 3 4 2 5 6
>>> df.clip(lower=[4, 5], axis='columns') # doctest: +SKIP A B 0 4 5 1 4 5 2 5 6
-
clip_upper
(threshold)¶ Trim values above a given threshold.
This docstring was copied from pandas.core.series.Series.clip_upper.
Some inconsistencies with the Dask version may exist.
Deprecated since version 0.24.0: Use clip(upper=threshold) instead.
Elements above the threshold will be changed to match the threshold value(s). Threshold can be a single value or an array, in the latter case it performs the truncation element-wise.
Parameters: threshold : numeric or array-like
Maximum value allowed. All values above threshold will be set to this value.
- float : every value is compared to threshold.
- array-like : The shape of threshold should match the object it’s compared to. When self is a Series, threshold should be the same length. When self is a DataFrame, threshold should be 2-D and the same shape as self for axis=None, or 1-D and the same length as the axis being compared.
axis : {0 or ‘index’, 1 or ‘columns’}, default 0 (Not supported in Dask)
Align object with threshold along the given axis.
inplace : bool, default False (Not supported in Dask)
Whether to perform the operation in place on the data.
New in version 0.21.0.
Returns: Series or DataFrame
Original data with values trimmed.
See also
Series.clip
- General purpose method to trim Series values to given threshold(s).
DataFrame.clip
- General purpose method to trim DataFrame values to given threshold(s).
Examples
>>> s = pd.Series([1, 2, 3, 4, 5]) # doctest: +SKIP >>> s # doctest: +SKIP 0 1 1 2 2 3 3 4 4 5 dtype: int64
>>> s.clip(upper=3) # doctest: +SKIP 0 1 1 2 2 3 3 3 4 3 dtype: int64
>>> elemwise_thresholds = [5, 4, 3, 2, 1] # doctest: +SKIP >>> elemwise_thresholds # doctest: +SKIP [5, 4, 3, 2, 1]
>>> s.clip(upper=elemwise_thresholds) # doctest: +SKIP 0 1 1 2 2 3 3 2 4 1 dtype: int64
-
combine
(other, func, fill_value=None)¶ Combine the Series with a Series or scalar according to func.
This docstring was copied from pandas.core.series.Series.combine.
Some inconsistencies with the Dask version may exist.
Combine the Series and other using func to perform elementwise selection for combined Series. fill_value is assumed when value is missing at some index from one of the two objects being combined.
Parameters: other : Series or scalar
The value(s) to be combined with the Series.
func : function
Function that takes two scalars as inputs and returns an element.
fill_value : scalar, optional
The value to assume when an index is missing from one Series or the other. The default specifies to use the appropriate NaN value for the underlying dtype of the Series.
Returns: Series
The result of combining the Series with the other object.
See also
Series.combine_first
- Combine Series values, choosing the calling Series’ values first.
Examples
Consider 2 Datasets s1 and s2 containing highest clocked speeds of different birds.
>>> s1 = pd.Series({'falcon': 330.0, 'eagle': 160.0}) # doctest: +SKIP
>>> s1 # doctest: +SKIP
falcon 330.0
eagle 160.0
dtype: float64
>>> s2 = pd.Series({'falcon': 345.0, 'eagle': 200.0, 'duck': 30.0}) # doctest: +SKIP
>>> s2 # doctest: +SKIP
falcon 345.0
eagle 200.0
duck 30.0
dtype: float64
Now, to combine the two datasets and view the highest speeds of the birds across the two datasets
>>> s1.combine(s2, max) # doctest: +SKIP duck NaN eagle 200.0 falcon 345.0 dtype: float64
In the previous example, the resulting value for duck is missing, because the maximum of a NaN and a float is a NaN. So, in the example, we set
fill_value=0
, so the maximum value returned will be the value from some dataset.>>> s1.combine(s2, max, fill_value=0) # doctest: +SKIP duck 30.0 eagle 200.0 falcon 345.0 dtype: float64
-
combine_first
(other)¶ Combine Series values, choosing the calling Series’s values first.
This docstring was copied from pandas.core.series.Series.combine_first.
Some inconsistencies with the Dask version may exist.
Parameters: other : Series
The value(s) to be combined with the Series.
Returns: Series
The result of combining the Series with the other object.
See also
Series.combine
- Perform elementwise operation on two Series using a given function.
Notes
Result index will be the union of the two indexes.
Examples
>>> s1 = pd.Series([1, np.nan]) # doctest: +SKIP >>> s2 = pd.Series([3, 4]) # doctest: +SKIP >>> s1.combine_first(s2) # doctest: +SKIP 0 1.0 1 4.0 dtype: float64
-
compute
(**kwargs)¶ Compute this dask collection
This turns a lazy Dask collection into its in-memory equivalent. For example a Dask.array turns into a numpy.array() and a Dask.dataframe turns into a Pandas dataframe. The entire dataset must fit into memory before calling this operation.
Parameters: scheduler : string, optional
Which scheduler to use like “threads”, “synchronous” or “processes”. If not provided, the default is to check the global settings first, and then fall back to the collection defaults.
optimize_graph : bool, optional
If True [default], the graph is optimized before computation. Otherwise the graph is run as is. This can be useful for debugging.
kwargs
Extra keywords to forward to the scheduler function.
See also
dask.base.compute
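A minimal sketch (using the ds Series from the apply example above; the scheduler name is one of the documented options):
>>> result = ds.compute()  # materialize as an in-memory pandas Series # doctest: +SKIP
>>> result = ds.compute(scheduler='processes')  # or pick a scheduler explicitly # doctest: +SKIP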
-
copy
()¶ Make a copy of the dataframe
This is strictly a shallow copy of the underlying computational graph. It does not affect the underlying data
-
corr
(other, method='pearson', min_periods=None, split_every=False)¶ Compute correlation with other Series, excluding missing values.
This docstring was copied from pandas.core.series.Series.corr.
Some inconsistencies with the Dask version may exist.
Parameters: other : Series
Series with which to compute the correlation.
method : {‘pearson’, ‘kendall’, ‘spearman’} or callable
- pearson : standard correlation coefficient
- kendall : Kendall Tau correlation coefficient
- spearman : Spearman rank correlation
- callable: callable with input two 1d ndarrays and returning a float. Note that the returned matrix from corr will have 1 along the diagonals and will be symmetric regardless of the callable’s behavior.
New in version 0.24.0.
min_periods : int, optional
Minimum number of observations needed to have a valid result.
Returns: float
Correlation with other.
Examples
>>> def histogram_intersection(a, b): # doctest: +SKIP ... v = np.minimum(a, b).sum().round(decimals=1) ... return v >>> s1 = pd.Series([.2, .0, .6, .2]) # doctest: +SKIP >>> s2 = pd.Series([.3, .6, .0, .1]) # doctest: +SKIP >>> s1.corr(s2, method=histogram_intersection) # doctest: +SKIP 0.3
-
count
(split_every=False)¶ Return number of non-NA/null observations in the Series.
This docstring was copied from pandas.core.series.Series.count.
Some inconsistencies with the Dask version may exist.
Parameters: level : int or level name, default None (Not supported in Dask)
If the axis is a MultiIndex (hierarchical), count along a particular level, collapsing into a smaller Series.
Returns: int or Series (if level specified)
Number of non-null values in the Series.
Examples
>>> s = pd.Series([0.0, 1.0, np.nan]) # doctest: +SKIP >>> s.count() # doctest: +SKIP 2
-
cov
(other, min_periods=None, split_every=False)¶ Compute covariance with Series, excluding missing values.
This docstring was copied from pandas.core.series.Series.cov.
Some inconsistencies with the Dask version may exist.
Parameters: other : Series
Series with which to compute the covariance.
min_periods : int, optional
Minimum number of observations needed to have a valid result.
Returns: float
Covariance between Series and other normalized by N-1 (unbiased estimator).
Examples
>>> s1 = pd.Series([0.90010907, 0.13484424, 0.62036035]) # doctest: +SKIP >>> s2 = pd.Series([0.12528585, 0.26962463, 0.51111198]) # doctest: +SKIP >>> s1.cov(s2) # doctest: +SKIP -0.01685762652715874
-
cummax
(axis=None, skipna=True, out=None)¶ Return cumulative maximum over a DataFrame or Series axis.
This docstring was copied from pandas.core.frame.DataFrame.cummax.
Some inconsistencies with the Dask version may exist.
Returns a DataFrame or Series of the same size containing the cumulative maximum.
Parameters: axis : {0 or ‘index’, 1 or ‘columns’}, default 0
The index or the name of the axis. 0 is equivalent to None or ‘index’.
skipna : boolean, default True
Exclude NA/null values. If an entire row/column is NA, the result will be NA.
*args, **kwargs :
Additional keywords have no effect but might be accepted for compatibility with NumPy.
Returns: Series or DataFrame
See also
core.window.Expanding.max
- Similar functionality but ignores NaN values.
DataFrame.max
- Return the maximum over DataFrame axis.
DataFrame.cummax
- Return cumulative maximum over DataFrame axis.
DataFrame.cummin
- Return cumulative minimum over DataFrame axis.
DataFrame.cumsum
- Return cumulative sum over DataFrame axis.
DataFrame.cumprod
- Return cumulative product over DataFrame axis.
Examples
Series
>>> s = pd.Series([2, np.nan, 5, -1, 0]) # doctest: +SKIP >>> s # doctest: +SKIP 0 2.0 1 NaN 2 5.0 3 -1.0 4 0.0 dtype: float64
By default, NA values are ignored.
>>> s.cummax() # doctest: +SKIP 0 2.0 1 NaN 2 5.0 3 5.0 4 5.0 dtype: float64
To include NA values in the operation, use
skipna=False
>>> s.cummax(skipna=False) # doctest: +SKIP 0 2.0 1 NaN 2 NaN 3 NaN 4 NaN dtype: float64
DataFrame
>>> df = pd.DataFrame([[2.0, 1.0], # doctest: +SKIP ... [3.0, np.nan], ... [1.0, 0.0]], ... columns=list('AB')) >>> df # doctest: +SKIP A B 0 2.0 1.0 1 3.0 NaN 2 1.0 0.0
By default, iterates over rows and finds the maximum in each column. This is equivalent to axis=None or axis='index'.
>>> df.cummax() # doctest: +SKIP
     A    B
0  2.0  1.0
1  3.0  NaN
2  3.0  1.0
To iterate over columns and find the maximum in each row, use
axis=1
>>> df.cummax(axis=1) # doctest: +SKIP A B 0 2.0 2.0 1 3.0 NaN 2 1.0 1.0
-
cummin
(axis=None, skipna=True, out=None)¶ Return cumulative minimum over a DataFrame or Series axis.
This docstring was copied from pandas.core.frame.DataFrame.cummin.
Some inconsistencies with the Dask version may exist.
Returns a DataFrame or Series of the same size containing the cumulative minimum.
Parameters: axis : {0 or ‘index’, 1 or ‘columns’}, default 0
The index or the name of the axis. 0 is equivalent to None or ‘index’.
skipna : boolean, default True
Exclude NA/null values. If an entire row/column is NA, the result will be NA.
*args, **kwargs :
Additional keywords have no effect but might be accepted for compatibility with NumPy.
Returns: Series or DataFrame
See also
core.window.Expanding.min
- Similar functionality but ignores NaN values.
DataFrame.min
- Return the minimum over DataFrame axis.
DataFrame.cummax
- Return cumulative maximum over DataFrame axis.
DataFrame.cummin
- Return cumulative minimum over DataFrame axis.
DataFrame.cumsum
- Return cumulative sum over DataFrame axis.
DataFrame.cumprod
- Return cumulative product over DataFrame axis.
Examples
Series
>>> s = pd.Series([2, np.nan, 5, -1, 0]) # doctest: +SKIP >>> s # doctest: +SKIP 0 2.0 1 NaN 2 5.0 3 -1.0 4 0.0 dtype: float64
By default, NA values are ignored.
>>> s.cummin() # doctest: +SKIP 0 2.0 1 NaN 2 2.0 3 -1.0 4 -1.0 dtype: float64
To include NA values in the operation, use
skipna=False
>>> s.cummin(skipna=False) # doctest: +SKIP 0 2.0 1 NaN 2 NaN 3 NaN 4 NaN dtype: float64
DataFrame
>>> df = pd.DataFrame([[2.0, 1.0], # doctest: +SKIP ... [3.0, np.nan], ... [1.0, 0.0]], ... columns=list('AB')) >>> df # doctest: +SKIP A B 0 2.0 1.0 1 3.0 NaN 2 1.0 0.0
By default, iterates over rows and finds the minimum in each column. This is equivalent to axis=None or axis='index'.
>>> df.cummin() # doctest: +SKIP
     A    B
0  2.0  1.0
1  2.0  NaN
2  1.0  0.0
To iterate over columns and find the minimum in each row, use
axis=1
>>> df.cummin(axis=1) # doctest: +SKIP A B 0 2.0 1.0 1 3.0 NaN 2 1.0 0.0
-
cumprod
(axis=None, skipna=True, dtype=None, out=None)¶ Return cumulative product over a DataFrame or Series axis.
This docstring was copied from pandas.core.frame.DataFrame.cumprod.
Some inconsistencies with the Dask version may exist.
Returns a DataFrame or Series of the same size containing the cumulative product.
Parameters: axis : {0 or ‘index’, 1 or ‘columns’}, default 0
The index or the name of the axis. 0 is equivalent to None or ‘index’.
skipna : boolean, default True
Exclude NA/null values. If an entire row/column is NA, the result will be NA.
*args, **kwargs :
Additional keywords have no effect but might be accepted for compatibility with NumPy.
Returns: Series or DataFrame
See also
core.window.Expanding.prod
- Similar functionality but ignores NaN values.
DataFrame.prod
- Return the product over DataFrame axis.
DataFrame.cummax
- Return cumulative maximum over DataFrame axis.
DataFrame.cummin
- Return cumulative minimum over DataFrame axis.
DataFrame.cumsum
- Return cumulative sum over DataFrame axis.
DataFrame.cumprod
- Return cumulative product over DataFrame axis.
Examples
Series
>>> s = pd.Series([2, np.nan, 5, -1, 0]) # doctest: +SKIP >>> s # doctest: +SKIP 0 2.0 1 NaN 2 5.0 3 -1.0 4 0.0 dtype: float64
By default, NA values are ignored.
>>> s.cumprod() # doctest: +SKIP 0 2.0 1 NaN 2 10.0 3 -10.0 4 -0.0 dtype: float64
To include NA values in the operation, use
skipna=False
>>> s.cumprod(skipna=False) # doctest: +SKIP 0 2.0 1 NaN 2 NaN 3 NaN 4 NaN dtype: float64
DataFrame
>>> df = pd.DataFrame([[2.0, 1.0], # doctest: +SKIP ... [3.0, np.nan], ... [1.0, 0.0]], ... columns=list('AB')) >>> df # doctest: +SKIP A B 0 2.0 1.0 1 3.0 NaN 2 1.0 0.0
By default, iterates over rows and finds the product in each column. This is equivalent to axis=None or axis='index'.
>>> df.cumprod() # doctest: +SKIP A B 0 2.0 1.0 1 6.0 NaN 2 6.0 0.0
To iterate over columns and find the product in each row, use
axis=1
>>> df.cumprod(axis=1) # doctest: +SKIP A B 0 2.0 2.0 1 3.0 NaN 2 1.0 0.0
-
cumsum
(axis=None, skipna=True, dtype=None, out=None)¶ Return cumulative sum over a DataFrame or Series axis.
This docstring was copied from pandas.core.frame.DataFrame.cumsum.
Some inconsistencies with the Dask version may exist.
Returns a DataFrame or Series of the same size containing the cumulative sum.
Parameters: axis : {0 or ‘index’, 1 or ‘columns’}, default 0
The index or the name of the axis. 0 is equivalent to None or ‘index’.
skipna : boolean, default True
Exclude NA/null values. If an entire row/column is NA, the result will be NA.
*args, **kwargs :
Additional keywords have no effect but might be accepted for compatibility with NumPy.
Returns: Series or DataFrame
See also
core.window.Expanding.sum
- Similar functionality but ignores NaN values.
DataFrame.sum
- Return the sum over DataFrame axis.
DataFrame.cummax
- Return cumulative maximum over DataFrame axis.
DataFrame.cummin
- Return cumulative minimum over DataFrame axis.
DataFrame.cumsum
- Return cumulative sum over DataFrame axis.
DataFrame.cumprod
- Return cumulative product over DataFrame axis.
Examples
Series
>>> s = pd.Series([2, np.nan, 5, -1, 0]) # doctest: +SKIP >>> s # doctest: +SKIP 0 2.0 1 NaN 2 5.0 3 -1.0 4 0.0 dtype: float64
By default, NA values are ignored.
>>> s.cumsum() # doctest: +SKIP 0 2.0 1 NaN 2 7.0 3 6.0 4 6.0 dtype: float64
To include NA values in the operation, use
skipna=False
>>> s.cumsum(skipna=False) # doctest: +SKIP 0 2.0 1 NaN 2 NaN 3 NaN 4 NaN dtype: float64
DataFrame
>>> df = pd.DataFrame([[2.0, 1.0], # doctest: +SKIP ... [3.0, np.nan], ... [1.0, 0.0]], ... columns=list('AB')) >>> df # doctest: +SKIP A B 0 2.0 1.0 1 3.0 NaN 2 1.0 0.0
By default, iterates over rows and finds the sum in each column. This is equivalent to axis=None or axis='index'.
>>> df.cumsum() # doctest: +SKIP A B 0 2.0 1.0 1 5.0 NaN 2 6.0 1.0
To iterate over columns and find the sum in each row, use
axis=1
>>> df.cumsum(axis=1) # doctest: +SKIP A B 0 2.0 3.0 1 3.0 NaN 2 1.0 1.0
-
describe
(split_every=False, percentiles=None, percentiles_method='default', include=None, exclude=None)¶ Generate descriptive statistics that summarize the central tendency, dispersion and shape of a dataset’s distribution, excluding NaN values.
This docstring was copied from pandas.core.frame.DataFrame.describe.
Some inconsistencies with the Dask version may exist.
Analyzes both numeric and object series, as well as DataFrame column sets of mixed data types. The output will vary depending on what is provided. Refer to the notes below for more detail.
Parameters: percentiles : list-like of numbers, optional
The percentiles to include in the output. All should fall between 0 and 1. The default is [.25, .5, .75], which returns the 25th, 50th, and 75th percentiles.
include : ‘all’, list-like of dtypes or None (default), optional
A white list of data types to include in the result. Ignored for Series. Here are the options:
- ‘all’ : All columns of the input will be included in the output.
- A list-like of dtypes : Limits the results to the provided data types. To limit the result to numeric types submit numpy.number. To limit it instead to object columns submit the numpy.object data type. Strings can also be used in the style of select_dtypes (e.g. df.describe(include=['O'])). To select pandas categorical columns, use 'category'.
- None (default) : The result will include all numeric columns.
exclude : list-like of dtypes or None (default), optional
A black list of data types to omit from the result. Ignored for Series. Here are the options:
- A list-like of dtypes : Excludes the provided data types from the result. To exclude numeric types submit numpy.number. To exclude object columns submit the data type numpy.object. Strings can also be used in the style of select_dtypes (e.g. df.describe(exclude=['O'])). To exclude pandas categorical columns, use 'category'.
- None (default) : The result will exclude nothing.
Returns: Series or DataFrame
Summary statistics of the Series or Dataframe provided.
See also
DataFrame.count
- Count number of non-NA/null observations.
DataFrame.max
- Maximum of the values in the object.
DataFrame.min
- Minimum of the values in the object.
DataFrame.mean
- Mean of the values.
DataFrame.std
- Standard deviation of the observations.
DataFrame.select_dtypes
- Subset of a DataFrame including/excluding columns based on their dtype.
Notes
For numeric data, the result’s index will include count, mean, std, min, max as well as lower, 50 and upper percentiles. By default the lower percentile is 25 and the upper percentile is 75. The 50 percentile is the same as the median.
For object data (e.g. strings or timestamps), the result’s index will include count, unique, top, and freq. The top is the most common value. The freq is the most common value’s frequency. Timestamps also include the first and last items.
If multiple object values have the highest count, then the count and top results will be arbitrarily chosen from among those with the highest count.
For mixed data types provided via a DataFrame, the default is to return only an analysis of numeric columns. If the dataframe consists only of object and categorical data without any numeric columns, the default is to return an analysis of both the object and categorical columns. If include='all' is provided as an option, the result will include a union of attributes of each type.
The include and exclude parameters can be used to limit which columns in a DataFrame are analyzed for the output. The parameters are ignored when analyzing a Series.
Examples
Describing a numeric Series.
>>> s = pd.Series([1, 2, 3]) # doctest: +SKIP >>> s.describe() # doctest: +SKIP count 3.0 mean 2.0 std 1.0 min 1.0 25% 1.5 50% 2.0 75% 2.5 max 3.0 dtype: float64
Describing a categorical Series.
>>> s = pd.Series(['a', 'a', 'b', 'c']) # doctest: +SKIP >>> s.describe() # doctest: +SKIP count 4 unique 3 top a freq 2 dtype: object
Describing a timestamp Series.
>>> s = pd.Series([ # doctest: +SKIP ... np.datetime64("2000-01-01"), ... np.datetime64("2010-01-01"), ... np.datetime64("2010-01-01") ... ]) >>> s.describe() # doctest: +SKIP count 3 unique 2 top 2010-01-01 00:00:00 freq 2 first 2000-01-01 00:00:00 last 2010-01-01 00:00:00 dtype: object
Describing a DataFrame. By default only numeric fields are returned.
>>> df = pd.DataFrame({'categorical': pd.Categorical(['d','e','f']), # doctest: +SKIP ... 'numeric': [1, 2, 3], ... 'object': ['a', 'b', 'c'] ... }) >>> df.describe() # doctest: +SKIP numeric count 3.0 mean 2.0 std 1.0 min 1.0 25% 1.5 50% 2.0 75% 2.5 max 3.0
Describing all columns of a DataFrame regardless of data type.
>>> df.describe(include='all') # doctest: +SKIP categorical numeric object count 3 3.0 3 unique 3 NaN 3 top f NaN c freq 1 NaN 1 mean NaN 2.0 NaN std NaN 1.0 NaN min NaN 1.0 NaN 25% NaN 1.5 NaN 50% NaN 2.0 NaN 75% NaN 2.5 NaN max NaN 3.0 NaN
Describing a column from a DataFrame by accessing it as an attribute.
>>> df.numeric.describe() # doctest: +SKIP count 3.0 mean 2.0 std 1.0 min 1.0 25% 1.5 50% 2.0 75% 2.5 max 3.0 Name: numeric, dtype: float64
Including only numeric columns in a DataFrame description.
>>> df.describe(include=[np.number]) # doctest: +SKIP numeric count 3.0 mean 2.0 std 1.0 min 1.0 25% 1.5 50% 2.0 75% 2.5 max 3.0
Including only string columns in a DataFrame description.
>>> df.describe(include=[np.object]) # doctest: +SKIP object count 3 unique 3 top c freq 1
Including only categorical columns from a DataFrame description.
>>> df.describe(include=['category']) # doctest: +SKIP categorical count 3 unique 3 top f freq 1
Excluding numeric columns from a DataFrame description.
>>> df.describe(exclude=[np.number]) # doctest: +SKIP categorical object count 3 3 unique 3 3 top f c freq 1 1
Excluding object columns from a DataFrame description.
>>> df.describe(exclude=[np.object]) # doctest: +SKIP categorical numeric count 3 3.0 unique 3 NaN top f NaN freq 1 NaN mean NaN 2.0 std NaN 1.0 min NaN 1.0 25% NaN 1.5 50% NaN 2.0 75% NaN 2.5 max NaN 3.0
-
diff
(periods=1, axis=0)¶ First discrete difference of element.
This docstring was copied from pandas.core.frame.DataFrame.diff.
Some inconsistencies with the Dask version may exist.
Note
Pandas currently uses an object-dtype column to represent boolean data with missing values. This can cause issues for boolean-specific operations, like |. To enable boolean-specific operations, at the cost of metadata that doesn’t match pandas, use .astype(bool) after the shift.
Calculates the difference of a DataFrame element compared with another element in the DataFrame (default is the element in the same column of the previous row).
Parameters: periods : int, default 1
Periods to shift for calculating difference, accepts negative values.
axis : {0 or ‘index’, 1 or ‘columns’}, default 0
Take difference over rows (0) or columns (1).
New in version 0.16.1.
Returns: DataFrame
See also
Series.diff
- First discrete difference for a Series.
DataFrame.pct_change
- Percent change over given number of periods.
DataFrame.shift
- Shift index by desired number of periods with an optional time freq.
Examples
Difference with previous row
>>> df = pd.DataFrame({'a': [1, 2, 3, 4, 5, 6], # doctest: +SKIP ... 'b': [1, 1, 2, 3, 5, 8], ... 'c': [1, 4, 9, 16, 25, 36]}) >>> df # doctest: +SKIP a b c 0 1 1 1 1 2 1 4 2 3 2 9 3 4 3 16 4 5 5 25 5 6 8 36
>>> df.diff() # doctest: +SKIP a b c 0 NaN NaN NaN 1 1.0 0.0 3.0 2 1.0 1.0 5.0 3 1.0 1.0 7.0 4 1.0 2.0 9.0 5 1.0 3.0 11.0
Difference with previous column
>>> df.diff(axis=1) # doctest: +SKIP a b c 0 NaN 0.0 0.0 1 NaN -1.0 3.0 2 NaN -1.0 7.0 3 NaN -1.0 13.0 4 NaN 0.0 20.0 5 NaN 2.0 28.0
Difference with 3rd previous row
>>> df.diff(periods=3) # doctest: +SKIP a b c 0 NaN NaN NaN 1 NaN NaN NaN 2 NaN NaN NaN 3 3.0 2.0 15.0 4 3.0 4.0 21.0 5 3.0 6.0 27.0
Difference with following row
>>> df.diff(periods=-1) # doctest: +SKIP a b c 0 -1.0 0.0 -3.0 1 -1.0 -1.0 -5.0 2 -1.0 -1.0 -7.0 3 -1.0 -2.0 -9.0 4 -1.0 -3.0 -11.0 5 NaN NaN NaN
-
div
(other, level=None, fill_value=None, axis=0)¶ Return Floating division of series and other, element-wise (binary operator truediv).
This docstring was copied from pandas.core.series.Series.div.
Some inconsistencies with the Dask version may exist.
Equivalent to series / other, but with support to substitute a fill_value for missing data in one of the inputs.
Parameters: other : Series or scalar value
fill_value : None or float value, default None (NaN)
Fill existing missing (NaN) values, and any new element needed for successful Series alignment, with this value before computation. If data in both corresponding Series locations is missing the result will be missing.
level : int or name
Broadcast across a level, matching Index values on the passed MultiIndex level.
Returns: Series
The result of the operation.
See also
Examples
>>> a = pd.Series([1, 1, 1, np.nan], index=['a', 'b', 'c', 'd']) # doctest: +SKIP >>> a # doctest: +SKIP a 1.0 b 1.0 c 1.0 d NaN dtype: float64 >>> b = pd.Series([1, np.nan, 1, np.nan], index=['a', 'b', 'd', 'e']) # doctest: +SKIP >>> b # doctest: +SKIP a 1.0 b NaN d 1.0 e NaN dtype: float64 >>> a.divide(b, fill_value=0) # doctest: +SKIP a 1.0 b inf c inf d 0.0 e NaN dtype: float64
-
divide
(other, level=None, fill_value=None, axis=0)¶ Return Floating division of series and other, element-wise (binary operator truediv).
This docstring was copied from pandas.core.series.Series.divide.
Some inconsistencies with the Dask version may exist.
Equivalent to series / other, but with support to substitute a fill_value for missing data in one of the inputs.
Parameters: other : Series or scalar value
fill_value : None or float value, default None (NaN)
Fill existing missing (NaN) values, and any new element needed for successful Series alignment, with this value before computation. If data in both corresponding Series locations is missing the result will be missing.
level : int or name
Broadcast across a level, matching Index values on the passed MultiIndex level.
Returns: Series
The result of the operation.
See also
Examples
>>> a = pd.Series([1, 1, 1, np.nan], index=['a', 'b', 'c', 'd']) # doctest: +SKIP >>> a # doctest: +SKIP a 1.0 b 1.0 c 1.0 d NaN dtype: float64 >>> b = pd.Series([1, np.nan, 1, np.nan], index=['a', 'b', 'd', 'e']) # doctest: +SKIP >>> b # doctest: +SKIP a 1.0 b NaN d 1.0 e NaN dtype: float64 >>> a.divide(b, fill_value=0) # doctest: +SKIP a 1.0 b inf c inf d 0.0 e NaN dtype: float64
-
drop_duplicates
(subset=None, split_every=None, split_out=1, **kwargs)¶ Return DataFrame with duplicate rows removed, optionally only considering certain columns. Indexes, including time indexes are ignored.
This docstring was copied from pandas.core.frame.DataFrame.drop_duplicates.
Some inconsistencies with the Dask version may exist.
Parameters: subset : column label or sequence of labels, optional
Only consider certain columns for identifying duplicates, by default use all of the columns
keep : {‘first’, ‘last’, False}, default ‘first’ (Not supported in Dask)
- first : Drop duplicates except for the first occurrence.
- last : Drop duplicates except for the last occurrence.
- False : Drop all duplicates.
inplace : boolean, default False (Not supported in Dask)
Whether to drop duplicates in place or to return a copy
Returns: DataFrame
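A minimal illustrative sketch (not part of the copied pandas docstring), assuming a small Dask DataFrame built with dd.from_pandas; the column names are invented for the example:
>>> import pandas as pd
>>> import dask.dataframe as dd
>>> pdf = pd.DataFrame({'x': [1, 1, 2, 2], 'y': ['a', 'b', 'a', 'b']})
>>> ddf = dd.from_pandas(pdf, npartitions=2)
>>> ddf.drop_duplicates(subset=['x']).compute() # keep one row per distinct 'x'; row order may differ from pandas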
-
dropna
()¶ Return a new Series with missing values removed.
This docstring was copied from pandas.core.series.Series.dropna.
Some inconsistencies with the Dask version may exist.
See the User Guide for more on which values are considered missing, and how to work with missing data.
Parameters: axis : {0 or ‘index’}, default 0 (Not supported in Dask)
There is only one axis to drop values from.
inplace : bool, default False (Not supported in Dask)
If True, do operation inplace and return None.
**kwargs
Not in use.
Returns: Series
Series with NA entries dropped from it.
See also
Series.isna
- Indicate missing values.
Series.notna
- Indicate existing (non-missing) values.
Series.fillna
- Replace missing values.
DataFrame.dropna
- Drop rows or columns which contain NA values.
Index.dropna
- Drop missing indices.
Examples
>>> ser = pd.Series([1., 2., np.nan]) # doctest: +SKIP >>> ser # doctest: +SKIP 0 1.0 1 2.0 2 NaN dtype: float64
Drop NA values from a Series.
>>> ser.dropna() # doctest: +SKIP 0 1.0 1 2.0 dtype: float64
Keep the Series with valid entries in the same variable.
>>> ser.dropna(inplace=True) # doctest: +SKIP >>> ser # doctest: +SKIP 0 1.0 1 2.0 dtype: float64
Empty strings are not considered NA values. None is considered an NA value.
>>> ser = pd.Series([np.NaN, 2, pd.NaT, '', None, 'I stay']) # doctest: +SKIP >>> ser # doctest: +SKIP 0 NaN 1 2 2 NaT 3 4 None 5 I stay dtype: object >>> ser.dropna() # doctest: +SKIP 1 2 3 5 I stay dtype: object
-
dt
¶ Namespace of datetime methods
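A short illustrative sketch (an assumption for this page, not copied from pandas), showing the accessor on a datetime-valued Dask Series:
>>> import pandas as pd
>>> import dask.dataframe as dd
>>> s = pd.Series(pd.date_range('2019-01-01', periods=4, freq='D'))
>>> ds = dd.from_pandas(s, npartitions=2)
>>> ds.dt.year.compute() # same .dt accessor as pandas, evaluated lazily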
-
dtype
¶ Return data type
-
eq
(other, level=None, fill_value=None, axis=0)¶ Return Equal to of series and other, element-wise (binary operator eq).
This docstring was copied from pandas.core.series.Series.eq.
Some inconsistencies with the Dask version may exist.
Equivalent to series == other, but with support to substitute a fill_value for missing data in one of the inputs.
Parameters: other : Series or scalar value
fill_value : None or float value, default None (NaN)
Fill existing missing (NaN) values, and any new element needed for successful Series alignment, with this value before computation. If data in both corresponding Series locations is missing the result will be missing.
level : int or name
Broadcast across a level, matching Index values on the passed MultiIndex level.
Returns: Series
The result of the operation.
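A minimal illustrative sketch (not from the copied docstring); the other comparison methods (ne, lt, le, gt, ge) follow the same pattern. The series below is invented for the example:
>>> import pandas as pd
>>> import dask.dataframe as dd
>>> s = dd.from_pandas(pd.Series([1, 2, 3, 4]), npartitions=2)
>>> s.eq(2).compute() # lazy element-wise comparison; returns a boolean Series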
-
explode
()¶ Transform each element of a list-like to a row, replicating the index values.
This docstring was copied from pandas.core.series.Series.explode.
Some inconsistencies with the Dask version may exist.
New in version 0.25.0.
Returns: Series
Exploded lists to rows; index will be duplicated for these rows.
See also
Series.str.split
- Split string values on specified separator.
Series.unstack
- Unstack, a.k.a. pivot, Series with MultiIndex to produce DataFrame.
DataFrame.melt
- Unpivot a DataFrame from wide format to long format
DataFrame.explode
- Explode a DataFrame from list-like columns to long format.
Notes
This routine will explode list-likes including lists, tuples, Series, and np.ndarray. The result dtype of the subset rows will be object. Scalars will be returned unchanged. Empty list-likes will result in a np.nan for that row.
Examples
>>> s = pd.Series([[1, 2, 3], 'foo', [], [3, 4]]) # doctest: +SKIP >>> s # doctest: +SKIP 0 [1, 2, 3] 1 foo 2 [] 3 [3, 4] dtype: object
>>> s.explode() # doctest: +SKIP 0 1 0 2 0 3 1 foo 2 NaN 3 3 3 4 dtype: object
-
ffill
(axis=None, limit=None)¶ Synonym for DataFrame.fillna() with method='ffill'.
This docstring was copied from pandas.core.frame.DataFrame.ffill.
Some inconsistencies with the Dask version may exist.
Returns: DataFrame
Object with missing values filled.
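A small illustrative sketch (an assumption, not part of the copied docstring), using a Dask DataFrame with gaps built via dd.from_pandas:
>>> import pandas as pd
>>> import numpy as np
>>> import dask.dataframe as dd
>>> ddf = dd.from_pandas(pd.DataFrame({'x': [1.0, np.nan, np.nan, 4.0]}), npartitions=2)
>>> ddf.ffill().compute() # propagate the last valid value forward, as with fillna(method='ffill')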
-
fillna
(value=None, method=None, limit=None, axis=None)¶ Fill NA/NaN values using the specified method.
This docstring was copied from pandas.core.frame.DataFrame.fillna.
Some inconsistencies with the Dask version may exist.
Parameters: value : scalar, dict, Series, or DataFrame
Value to use to fill holes (e.g. 0), alternately a dict/Series/DataFrame of values specifying which value to use for each index (for a Series) or column (for a DataFrame). Values not in the dict/Series/DataFrame will not be filled. This value cannot be a list.
method : {‘backfill’, ‘bfill’, ‘pad’, ‘ffill’, None}, default None
Method to use for filling holes in reindexed Series pad / ffill: propagate last valid observation forward to next valid backfill / bfill: use next valid observation to fill gap.
axis : {0 or ‘index’, 1 or ‘columns’}
Axis along which to fill missing values.
inplace : bool, default False (Not supported in Dask)
If True, fill in-place. Note: this will modify any other views on this object (e.g., a no-copy slice for a column in a DataFrame).
limit : int, default None
If method is specified, this is the maximum number of consecutive NaN values to forward/backward fill. In other words, if there is a gap with more than this number of consecutive NaNs, it will only be partially filled. If method is not specified, this is the maximum number of entries along the entire axis where NaNs will be filled. Must be greater than 0 if not None.
downcast : dict, default is None (Not supported in Dask)
A dict of item->dtype of what to downcast if possible, or the string ‘infer’ which will try to downcast to an appropriate equal type (e.g. float64 to int64 if possible).
Returns: DataFrame
Object with missing values filled.
See also
interpolate
- Fill NaN values using interpolation.
reindex
- Conform object to new index.
asfreq
- Convert TimeSeries to specified frequency.
Examples
>>> df = pd.DataFrame([[np.nan, 2, np.nan, 0], # doctest: +SKIP ... [3, 4, np.nan, 1], ... [np.nan, np.nan, np.nan, 5], ... [np.nan, 3, np.nan, 4]], ... columns=list('ABCD')) >>> df # doctest: +SKIP A B C D 0 NaN 2.0 NaN 0 1 3.0 4.0 NaN 1 2 NaN NaN NaN 5 3 NaN 3.0 NaN 4
Replace all NaN elements with 0s.
>>> df.fillna(0) # doctest: +SKIP A B C D 0 0.0 2.0 0.0 0 1 3.0 4.0 0.0 1 2 0.0 0.0 0.0 5 3 0.0 3.0 0.0 4
We can also propagate non-null values forward or backward.
>>> df.fillna(method='ffill') # doctest: +SKIP A B C D 0 NaN 2.0 NaN 0 1 3.0 4.0 NaN 1 2 3.0 4.0 NaN 5 3 3.0 3.0 NaN 4
Replace all NaN elements in column ‘A’, ‘B’, ‘C’, and ‘D’, with 0, 1, 2, and 3 respectively.
>>> values = {'A': 0, 'B': 1, 'C': 2, 'D': 3} # doctest: +SKIP >>> df.fillna(value=values) # doctest: +SKIP A B C D 0 0.0 2.0 2.0 0 1 3.0 4.0 2.0 1 2 0.0 1.0 2.0 5 3 0.0 3.0 2.0 4
Only replace the first NaN element.
>>> df.fillna(value=values, limit=1) # doctest: +SKIP A B C D 0 0.0 2.0 2.0 0 1 3.0 4.0 NaN 1 2 NaN 1.0 NaN 5 3 NaN 3.0 NaN 4
-
first
(offset)¶ Convenience method for subsetting initial periods of time series data based on a date offset.
This docstring was copied from pandas.core.frame.DataFrame.first.
Some inconsistencies with the Dask version may exist.
Parameters: offset : string, DateOffset, dateutil.relativedelta
Returns: subset : same type as caller
Raises: TypeError
If the index is not a
DatetimeIndex
See also
last
- Select final periods of time series based on a date offset.
at_time
- Select values at a particular time of the day.
between_time
- Select values between particular times of the day.
Examples
>>> i = pd.date_range('2018-04-09', periods=4, freq='2D') # doctest: +SKIP >>> ts = pd.DataFrame({'A': [1,2,3,4]}, index=i) # doctest: +SKIP >>> ts # doctest: +SKIP A 2018-04-09 1 2018-04-11 2 2018-04-13 3 2018-04-15 4
Get the rows for the first 3 days:
>>> ts.first('3D') # doctest: +SKIP A 2018-04-09 1 2018-04-11 2
Notice that data for the first 3 calendar days were returned, not the first 3 days observed in the dataset, and therefore data for 2018-04-13 was not returned.
-
floordiv
(other, level=None, fill_value=None, axis=0)¶ Return Integer division of series and other, element-wise (binary operator floordiv).
This docstring was copied from pandas.core.series.Series.floordiv.
Some inconsistencies with the Dask version may exist.
Equivalent to series // other, but with support to substitute a fill_value for missing data in one of the inputs.
Parameters: other : Series or scalar value
fill_value : None or float value, default None (NaN)
Fill existing missing (NaN) values, and any new element needed for successful Series alignment, with this value before computation. If data in both corresponding Series locations is missing the result will be missing.
level : int or name
Broadcast across a level, matching Index values on the passed MultiIndex level.
Returns: Series
The result of the operation.
See also
Examples
>>> a = pd.Series([1, 1, 1, np.nan], index=['a', 'b', 'c', 'd']) # doctest: +SKIP >>> a # doctest: +SKIP a 1.0 b 1.0 c 1.0 d NaN dtype: float64 >>> b = pd.Series([1, np.nan, 1, np.nan], index=['a', 'b', 'd', 'e']) # doctest: +SKIP >>> b # doctest: +SKIP a 1.0 b NaN d 1.0 e NaN dtype: float64 >>> a.floordiv(b, fill_value=0) # doctest: +SKIP a 1.0 b NaN c NaN d 0.0 e NaN dtype: float64
-
ge
(other, level=None, fill_value=None, axis=0)¶ Return Greater than or equal to of series and other, element-wise (binary operator ge).
This docstring was copied from pandas.core.series.Series.ge.
Some inconsistencies with the Dask version may exist.
Equivalent to series >= other, but with support to substitute a fill_value for missing data in one of the inputs.
Parameters: other : Series or scalar value
fill_value : None or float value, default None (NaN)
Fill existing missing (NaN) values, and any new element needed for successful Series alignment, with this value before computation. If data in both corresponding Series locations is missing the result will be missing.
level : int or name
Broadcast across a level, matching Index values on the passed MultiIndex level.
Returns: Series
The result of the operation.
-
get_partition
(n)¶ Get a dask DataFrame/Series representing the nth partition.
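A brief illustrative sketch (not from a pandas docstring); the frame below is invented for the example:
>>> import pandas as pd
>>> import dask.dataframe as dd
>>> ddf = dd.from_pandas(pd.DataFrame({'x': range(6)}), npartitions=3)
>>> part = ddf.get_partition(0) # still a lazy dask DataFrame backed by a single partition
>>> part.compute() # materialize only that partition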
-
groupby
(by=None, **kwargs)¶ Group DataFrame or Series using a mapper or by a Series of columns.
This docstring was copied from pandas.core.series.Series.groupby.
Some inconsistencies with the Dask version may exist.
A groupby operation involves some combination of splitting the object, applying a function, and combining the results. This can be used to group large amounts of data and compute operations on these groups.
Parameters: by : mapping, function, label, or list of labels
Used to determine the groups for the groupby. If by is a function, it’s called on each value of the object’s index. If a dict or Series is passed, the Series or dict VALUES will be used to determine the groups (the Series’ values are first aligned; see the .align() method). If an ndarray is passed, the values are used as-is to determine the groups. A label or list of labels may be passed to group by the columns in self. Notice that a tuple is interpreted as a (single) key.
axis : {0 or ‘index’, 1 or ‘columns’}, default 0 (Not supported in Dask)
Split along rows (0) or columns (1).
level : int, level name, or sequence of such, default None (Not supported in Dask)
If the axis is a MultiIndex (hierarchical), group by a particular level or levels.
as_index : bool, default True (Not supported in Dask)
For aggregated output, return object with group labels as the index. Only relevant for DataFrame input. as_index=False is effectively “SQL-style” grouped output.
sort : bool, default True (Not supported in Dask)
Sort group keys. Get better performance by turning this off. Note this does not influence the order of observations within each group. Groupby preserves the order of rows within each group.
group_keys : bool, default True (Not supported in Dask)
When calling apply, add group keys to index to identify pieces.
squeeze : bool, default False (Not supported in Dask)
Reduce the dimensionality of the return type if possible, otherwise return a consistent type.
observed : bool, default False (Not supported in Dask)
This only applies if any of the groupers are Categoricals. If True: only show observed values for categorical groupers. If False: show all values for categorical groupers.
New in version 0.23.0.
**kwargs
Optional, only accepts keyword argument ‘mutated’ and is passed to groupby.
Returns: DataFrameGroupBy or SeriesGroupBy
Depends on the calling object and returns groupby object that contains information about the groups.
See also
resample
- Convenience method for frequency conversion and resampling of time series.
Notes
See the user guide for more.
Examples
>>> df = pd.DataFrame({'Animal': ['Falcon', 'Falcon', # doctest: +SKIP ... 'Parrot', 'Parrot'], ... 'Max Speed': [380., 370., 24., 26.]}) >>> df # doctest: +SKIP Animal Max Speed 0 Falcon 380.0 1 Falcon 370.0 2 Parrot 24.0 3 Parrot 26.0 >>> df.groupby(['Animal']).mean() # doctest: +SKIP Max Speed Animal Falcon 375.0 Parrot 25.0
Hierarchical Indexes
We can groupby different levels of a hierarchical index using the level parameter:
>>> arrays = [['Falcon', 'Falcon', 'Parrot', 'Parrot'], # doctest: +SKIP ... ['Captive', 'Wild', 'Captive', 'Wild']] >>> index = pd.MultiIndex.from_arrays(arrays, names=('Animal', 'Type')) # doctest: +SKIP >>> df = pd.DataFrame({'Max Speed': [390., 350., 30., 20.]}, # doctest: +SKIP ... index=index) >>> df # doctest: +SKIP Max Speed Animal Type Falcon Captive 390.0 Wild 350.0 Parrot Captive 30.0 Wild 20.0 >>> df.groupby(level=0).mean() # doctest: +SKIP Max Speed Animal Falcon 370.0 Parrot 25.0 >>> df.groupby(level=1).mean() # doctest: +SKIP Max Speed Type Captive 210.0 Wild 185.0
-
gt
(other, level=None, fill_value=None, axis=0)¶ Return Greater than of series and other, element-wise (binary operator gt).
This docstring was copied from pandas.core.series.Series.gt.
Some inconsistencies with the Dask version may exist.
Equivalent to series > other, but with support to substitute a fill_value for missing data in one of the inputs.
Parameters: other : Series or scalar value
fill_value : None or float value, default None (NaN)
Fill existing missing (NaN) values, and any new element needed for successful Series alignment, with this value before computation. If data in both corresponding Series locations is missing the result will be missing.
level : int or name
Broadcast across a level, matching Index values on the passed MultiIndex level.
Returns: Series
The result of the operation.
-
head
(n=5, npartitions=1, compute=True)¶ First n rows of the dataset
Parameters: n : int, optional
The number of rows to return. Default is 5.
npartitions : int, optional
Elements are only taken from the first npartitions, with a default of 1. If there are fewer than n rows in the first npartitions, a warning will be raised and any found rows returned. Pass -1 to use all partitions.
compute : bool, optional
Whether to compute the result, default is True.
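A short illustrative sketch (an assumption for this page); note that head returns a concrete pandas object because compute defaults to True:
>>> import pandas as pd
>>> import dask.dataframe as dd
>>> ddf = dd.from_pandas(pd.DataFrame({'x': range(10)}), npartitions=2)
>>> ddf.head(3) # reads only the first partition
>>> ddf.head(3, npartitions=-1) # search across every partition instead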
-
idxmax
(axis=None, skipna=True, split_every=False)¶ Return index of first occurrence of maximum over requested axis. NA/null values are excluded.
This docstring was copied from pandas.core.frame.DataFrame.idxmax.
Some inconsistencies with the Dask version may exist.
Parameters: axis : {0 or ‘index’, 1 or ‘columns’}, default 0
0 or ‘index’ for row-wise, 1 or ‘columns’ for column-wise
skipna : boolean, default True
Exclude NA/null values. If an entire row/column is NA, the result will be NA.
Returns: Series
Indexes of maxima along the specified axis.
Raises: ValueError
- If the row/column is empty
See also
Notes
This method is the DataFrame version of ndarray.argmax.
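A minimal illustrative sketch (not part of the copied docstring), with an invented two-column frame:
>>> import pandas as pd
>>> import dask.dataframe as dd
>>> ddf = dd.from_pandas(pd.DataFrame({'a': [1, 5, 3], 'b': [9, 2, 4]}), npartitions=2)
>>> ddf.idxmax().compute() # index label of the maximum in each column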
-
idxmin
(axis=None, skipna=True, split_every=False)¶ Return index of first occurrence of minimum over requested axis. NA/null values are excluded.
This docstring was copied from pandas.core.frame.DataFrame.idxmin.
Some inconsistencies with the Dask version may exist.
Parameters: axis : {0 or ‘index’, 1 or ‘columns’}, default 0
0 or ‘index’ for row-wise, 1 or ‘columns’ for column-wise
skipna : boolean, default True
Exclude NA/null values. If an entire row/column is NA, the result will be NA.
Returns: Series
Indexes of minima along the specified axis.
Raises: ValueError
- If the row/column is empty
See also
Notes
This method is the DataFrame version of ndarray.argmin.
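idxmin mirrors the idxmax sketch above; assuming the same invented ddf:
>>> ddf.idxmin().compute() # index label of the minimum in each column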
-
index
¶ Return dask Index instance
-
isin
(values)¶ Check whether values are contained in Series.
This docstring was copied from pandas.core.series.Series.isin.
Some inconsistencies with the Dask version may exist.
Return a boolean Series showing whether each element in the Series matches an element in the passed sequence of values exactly.
Parameters: values : set or list-like
The sequence of values to test. Passing in a single string will raise a TypeError. Instead, turn a single string into a list of one element.
New in version 0.18.1: Support for values as a set.
Returns: Series
Series of booleans indicating if each element is in values.
Raises: TypeError
- If values is a string
See also
DataFrame.isin
- Equivalent method on DataFrame.
Examples
>>> s = pd.Series(['lama', 'cow', 'lama', 'beetle', 'lama', # doctest: +SKIP ... 'hippo'], name='animal') >>> s.isin(['cow', 'lama']) # doctest: +SKIP 0 True 1 True 2 True 3 False 4 True 5 False Name: animal, dtype: bool
Passing a single string as s.isin('lama') will raise an error. Use a list of one element instead:
>>> s.isin(['lama']) # doctest: +SKIP 0 True 1 False 2 True 3 False 4 True 5 False Name: animal, dtype: bool
-
isna
()¶ Detect missing values.
This docstring was copied from pandas.core.frame.DataFrame.isna.
Some inconsistencies with the Dask version may exist.
Return a boolean same-sized object indicating if the values are NA. NA values, such as None or numpy.NaN, get mapped to True values. Everything else gets mapped to False values. Characters such as empty strings '' or numpy.inf are not considered NA values (unless you set pandas.options.mode.use_inf_as_na = True).
Returns: DataFrame
Mask of bool values for each element in DataFrame that indicates whether an element is not an NA value.
See also
DataFrame.isnull
- Alias of isna.
DataFrame.notna
- Boolean inverse of isna.
DataFrame.dropna
- Omit axes labels with missing values.
isna
- Top-level isna.
Examples
Show which entries in a DataFrame are NA.
>>> df = pd.DataFrame({'age': [5, 6, np.NaN], # doctest: +SKIP ... 'born': [pd.NaT, pd.Timestamp('1939-05-27'), ... pd.Timestamp('1940-04-25')], ... 'name': ['Alfred', 'Batman', ''], ... 'toy': [None, 'Batmobile', 'Joker']}) >>> df # doctest: +SKIP age born name toy 0 5.0 NaT Alfred None 1 6.0 1939-05-27 Batman Batmobile 2 NaN 1940-04-25 Joker
>>> df.isna() # doctest: +SKIP age born name toy 0 False True False True 1 False False False False 2 True False False False
Show which entries in a Series are NA.
>>> ser = pd.Series([5, 6, np.NaN]) # doctest: +SKIP >>> ser # doctest: +SKIP 0 5.0 1 6.0 2 NaN dtype: float64
>>> ser.isna() # doctest: +SKIP 0 False 1 False 2 True dtype: bool
-
isnull
()¶ Detect missing values.
This docstring was copied from pandas.core.frame.DataFrame.isnull.
Some inconsistencies with the Dask version may exist.
Return a boolean same-sized object indicating if the values are NA. NA values, such as None or numpy.NaN, get mapped to True values. Everything else gets mapped to False values. Characters such as empty strings '' or numpy.inf are not considered NA values (unless you set pandas.options.mode.use_inf_as_na = True).
Returns: DataFrame
Mask of bool values for each element in DataFrame that indicates whether an element is not an NA value.
See also
DataFrame.isnull
- Alias of isna.
DataFrame.notna
- Boolean inverse of isna.
DataFrame.dropna
- Omit axes labels with missing values.
isna
- Top-level isna.
Examples
Show which entries in a DataFrame are NA.
>>> df = pd.DataFrame({'age': [5, 6, np.NaN], # doctest: +SKIP ... 'born': [pd.NaT, pd.Timestamp('1939-05-27'), ... pd.Timestamp('1940-04-25')], ... 'name': ['Alfred', 'Batman', ''], ... 'toy': [None, 'Batmobile', 'Joker']}) >>> df # doctest: +SKIP age born name toy 0 5.0 NaT Alfred None 1 6.0 1939-05-27 Batman Batmobile 2 NaN 1940-04-25 Joker
>>> df.isna() # doctest: +SKIP age born name toy 0 False True False True 1 False False False False 2 True False False False
Show which entries in a Series are NA.
>>> ser = pd.Series([5, 6, np.NaN]) # doctest: +SKIP >>> ser # doctest: +SKIP 0 5.0 1 6.0 2 NaN dtype: float64
>>> ser.isna() # doctest: +SKIP 0 False 1 False 2 True dtype: bool
-
iteritems
()¶ Lazily iterate over (index, value) tuples.
This docstring was copied from pandas.core.series.Series.iteritems.
Some inconsistencies with the Dask version may exist.
This method returns an iterable tuple (index, value). This is convenient if you want to create a lazy iterator.
Returns: iterable
Iterable of tuples containing the (index, value) pairs from a Series.
See also
DataFrame.items
- Equivalent to Series.items for DataFrame.
Examples
>>> s = pd.Series(['A', 'B', 'C']) # doctest: +SKIP >>> for index, value in s.items(): # doctest: +SKIP ... print("Index : {}, Value : {}".format(index, value)) Index : 0, Value : A Index : 1, Value : B Index : 2, Value : C
-
known_divisions
¶ Whether divisions are already known
-
last
(offset)¶ Convenience method for subsetting final periods of time series data based on a date offset.
This docstring was copied from pandas.core.frame.DataFrame.last.
Some inconsistencies with the Dask version may exist.
Parameters: offset : string, DateOffset, dateutil.relativedelta
Returns: subset : same type as caller
Raises: TypeError
If the index is not a
DatetimeIndex
See also
first
- Select initial periods of time series based on a date offset.
at_time
- Select values at a particular time of the day.
between_time
- Select values between particular times of the day.
Examples
>>> i = pd.date_range('2018-04-09', periods=4, freq='2D') # doctest: +SKIP >>> ts = pd.DataFrame({'A': [1,2,3,4]}, index=i) # doctest: +SKIP >>> ts # doctest: +SKIP A 2018-04-09 1 2018-04-11 2 2018-04-13 3 2018-04-15 4
Get the rows for the last 3 days:
>>> ts.last('3D') # doctest: +SKIP A 2018-04-13 3 2018-04-15 4
Notice that data for the last 3 calendar days were returned, not the last 3 observed days in the dataset, and therefore data for 2018-04-11 was not returned.
-
le
(other, level=None, fill_value=None, axis=0)¶ Return Less than or equal to of series and other, element-wise (binary operator le).
This docstring was copied from pandas.core.series.Series.le.
Some inconsistencies with the Dask version may exist.
Equivalent to series <= other, but with support to substitute a fill_value for missing data in one of the inputs.
Parameters: other : Series or scalar value
fill_value : None or float value, default None (NaN)
Fill existing missing (NaN) values, and any new element needed for successful Series alignment, with this value before computation. If data in both corresponding Series locations is missing the result will be missing.
level : int or name
Broadcast across a level, matching Index values on the passed MultiIndex level.
Returns: Series
The result of the operation.
-
loc
¶ Purely label-location based indexer for selection by label.
>>> df.loc["b"] # doctest: +SKIP >>> df.loc["b":"d"] # doctest: +SKIP
-
lt
(other, level=None, fill_value=None, axis=0)¶ Return Less than of series and other, element-wise (binary operator lt).
This docstring was copied from pandas.core.series.Series.lt.
Some inconsistencies with the Dask version may exist.
Equivalent to series < other, but with support to substitute a fill_value for missing data in one of the inputs.
Parameters: other : Series or scalar value
fill_value : None or float value, default None (NaN)
Fill existing missing (NaN) values, and any new element needed for successful Series alignment, with this value before computation. If data in both corresponding Series locations is missing the result will be missing.
level : int or name
Broadcast across a level, matching Index values on the passed MultiIndex level.
Returns: Series
The result of the operation.
-
map
(arg, na_action=None, meta='__no_default__')¶ Map values of Series according to input correspondence.
This docstring was copied from pandas.core.series.Series.map.
Some inconsistencies with the Dask version may exist.
Used for substituting each value in a Series with another value, that may be derived from a function, a dict or a Series.
Parameters: arg : function, dict, or Series
Mapping correspondence.
na_action : {None, ‘ignore’}, default None
If ‘ignore’, propagate NaN values, without passing them to the mapping correspondence.
meta : pd.DataFrame, pd.Series, dict, iterable, tuple, optional
An empty pd.DataFrame or pd.Series that matches the dtypes and column names of the output. This metadata is necessary for many algorithms in dask dataframe to work. For ease of use, some alternative inputs are also available. Instead of a DataFrame, a dict of {name: dtype} or iterable of (name, dtype) can be provided (note that the order of the names should match the order of the columns). Instead of a series, a tuple of (name, dtype) can be used. If not provided, dask will try to infer the metadata. This may lead to unexpected results, so providing meta is recommended. For more information, see dask.dataframe.utils.make_meta.
Returns: Series
Same index as caller.
See also
Series.apply
- For applying more complex functions on a Series.
DataFrame.apply
- Apply a function row-/column-wise.
DataFrame.applymap
- Apply a function elementwise on a whole DataFrame.
Notes
When arg is a dictionary, values in Series that are not in the dictionary (as keys) are converted to NaN. However, if the dictionary is a dict subclass that defines __missing__ (i.e. provides a method for default values), then this default is used rather than NaN.
Examples
>>> s = pd.Series(['cat', 'dog', np.nan, 'rabbit']) # doctest: +SKIP >>> s # doctest: +SKIP 0 cat 1 dog 2 NaN 3 rabbit dtype: object
map accepts a dict or a Series. Values that are not found in the dict are converted to NaN, unless the dict has a default value (e.g. defaultdict):
>>> s.map({'cat': 'kitten', 'dog': 'puppy'}) # doctest: +SKIP 0 kitten 1 puppy 2 NaN 3 NaN dtype: object
It also accepts a function:
>>> s.map('I am a {}'.format) # doctest: +SKIP 0 I am a cat 1 I am a dog 2 I am a nan 3 I am a rabbit dtype: object
To avoid applying the function to missing values (and keep them as NaN), na_action='ignore' can be used:
>>> s.map('I am a {}'.format, na_action='ignore') # doctest: +SKIP 0 I am a cat 1 I am a dog 2 NaN 3 I am a rabbit dtype: object
-
map_overlap
(func, before, after, *args, **kwargs)¶ Apply a function to each partition, sharing rows with adjacent partitions.
This can be useful for implementing windowing functions such as df.rolling(...).mean() or df.diff().
Parameters: func : function
Function applied to each partition.
before : int
The number of rows to prepend to partition i from the end of partition i - 1.
after : int
The number of rows to append to partition i from the beginning of partition i + 1.
args, kwargs :
Arguments and keywords to pass to the function. The partition will be the first argument, and these will be passed after.
meta : pd.DataFrame, pd.Series, dict, iterable, tuple, optional
An empty pd.DataFrame or pd.Series that matches the dtypes and column names of the output. This metadata is necessary for many algorithms in dask dataframe to work. For ease of use, some alternative inputs are also available. Instead of a DataFrame, a dict of {name: dtype} or iterable of (name, dtype) can be provided (note that the order of the names should match the order of the columns). Instead of a series, a tuple of (name, dtype) can be used. If not provided, dask will try to infer the metadata. This may lead to unexpected results, so providing meta is recommended. For more information, see dask.dataframe.utils.make_meta.
Notes
Given positive integers before and after, and a function func, map_overlap does the following:
- Prepend before rows to each partition i from the end of partition i - 1. The first partition has no rows prepended.
- Append after rows to each partition i from the beginning of partition i + 1. The last partition has no rows appended.
- Apply func to each partition, passing in any extra args and kwargs if provided.
- Trim before rows from the beginning of all but the first partition.
- Trim after rows from the end of all but the last partition.
Note that the index and divisions are assumed to remain unchanged.
Examples
Given a DataFrame, Series, or Index, such as:
>>> import dask.dataframe as dd >>> df = pd.DataFrame({'x': [1, 2, 4, 7, 11], ... 'y': [1., 2., 3., 4., 5.]}) >>> ddf = dd.from_pandas(df, npartitions=2)
A rolling sum with a trailing moving window of size 2 can be computed by overlapping 2 rows before each partition, and then mapping calls to df.rolling(2).sum():
>>> ddf.compute() x y 0 1 1.0 1 2 2.0 2 4 3.0 3 7 4.0 4 11 5.0 >>> ddf.map_overlap(lambda df: df.rolling(2).sum(), 2, 0).compute() x y 0 NaN NaN 1 3.0 3.0 2 6.0 5.0 3 11.0 7.0 4 18.0 9.0
The pandas diff method computes a discrete difference shifted by a number of periods (can be positive or negative). This can be implemented by mapping calls to df.diff to each partition after prepending/appending that many rows, depending on sign:
>>> def diff(df, periods=1): ... before, after = (periods, 0) if periods > 0 else (0, -periods) ... return df.map_overlap(lambda df, periods=1: df.diff(periods), ... periods, 0, periods=periods) >>> diff(ddf, 1).compute() x y 0 NaN NaN 1 1.0 1.0 2 2.0 1.0 3 3.0 1.0 4 4.0 1.0
If you have a DatetimeIndex, you can use a pd.Timedelta for time-based windows.
>>> ts = pd.Series(range(10), index=pd.date_range('2017', periods=10)) >>> dts = dd.from_pandas(ts, npartitions=2) >>> dts.map_overlap(lambda df: df.rolling('2D').sum(), ... pd.Timedelta('2D'), 0).compute() 2017-01-01 0.0 2017-01-02 1.0 2017-01-03 3.0 2017-01-04 5.0 2017-01-05 7.0 2017-01-06 9.0 2017-01-07 11.0 2017-01-08 13.0 2017-01-09 15.0 2017-01-10 17.0 Freq: D, dtype: float64
-
map_partitions
(func, *args, **kwargs)¶ Apply Python function on each DataFrame partition.
Note that the index and divisions are assumed to remain unchanged.
Parameters: func : function
Function applied to each partition.
args, kwargs :
Arguments and keywords to pass to the function. The partition will be the first argument, and these will be passed after. Arguments and keywords may contain Scalar, Delayed or regular python objects. DataFrame-like args (both dask and pandas) will be repartitioned to align (if necessary) before applying the function.
meta : pd.DataFrame, pd.Series, dict, iterable, tuple, optional
An empty pd.DataFrame or pd.Series that matches the dtypes and column names of the output. This metadata is necessary for many algorithms in dask dataframe to work. For ease of use, some alternative inputs are also available. Instead of a DataFrame, a dict of {name: dtype} or iterable of (name, dtype) can be provided (note that the order of the names should match the order of the columns). Instead of a series, a tuple of (name, dtype) can be used. If not provided, dask will try to infer the metadata. This may lead to unexpected results, so providing meta is recommended. For more information, see dask.dataframe.utils.make_meta.
Examples
Given a DataFrame, Series, or Index, such as:
>>> import dask.dataframe as dd >>> df = pd.DataFrame({'x': [1, 2, 3, 4, 5], ... 'y': [1., 2., 3., 4., 5.]}) >>> ddf = dd.from_pandas(df, npartitions=2)
One can use map_partitions to apply a function on each partition. Extra arguments and keywords can optionally be provided, and will be passed to the function after the partition.
Here we apply a function with arguments and keywords to a DataFrame, resulting in a Series:
>>> def myadd(df, a, b=1): ... return df.x + df.y + a + b >>> res = ddf.map_partitions(myadd, 1, b=2) >>> res.dtype dtype('float64')
By default, dask tries to infer the output metadata by running your provided function on some fake data. This works well in many cases, but can sometimes be expensive, or even fail. To avoid this, you can manually specify the output metadata with the meta keyword. This can be specified in many forms, for more information see dask.dataframe.utils.make_meta.
Here we specify the output is a Series with no name, and dtype float64:
>>> res = ddf.map_partitions(myadd, 1, b=2, meta=(None, 'f8'))
Here we map a function that takes in a DataFrame, and returns a DataFrame with a new column:
>>> res = ddf.map_partitions(lambda df: df.assign(z=df.x * df.y)) >>> res.dtypes x int64 y float64 z float64 dtype: object
As before, the output metadata can also be specified manually. This time we pass in a dict, as the output is a DataFrame:
>>> res = ddf.map_partitions(lambda df: df.assign(z=df.x * df.y), ... meta={'x': 'i8', 'y': 'f8', 'z': 'f8'})
In the case where the metadata doesn’t change, you can also pass in the object itself directly:
>>> res = ddf.map_partitions(lambda df: df.head(), meta=df)
Also note that the index and divisions are assumed to remain unchanged. If the function you’re mapping changes the index/divisions, you’ll need to clear them afterwards:
>>> ddf.map_partitions(func).clear_divisions() # doctest: +SKIP
-
mask
(cond, other=nan)¶ Replace values where the condition is True.
This docstring was copied from pandas.core.frame.DataFrame.mask.
Some inconsistencies with the Dask version may exist.
Parameters: cond : boolean Series/DataFrame, array-like, or callable
Where cond is False, keep the original value. Where True, replace with corresponding value from other. If cond is callable, it is computed on the Series/DataFrame and should return boolean Series/DataFrame or array. The callable must not change input Series/DataFrame (though pandas doesn’t check it).
New in version 0.18.1: A callable can be used as cond.
other : scalar, Series/DataFrame, or callable
Entries where cond is True are replaced with corresponding value from other. If other is callable, it is computed on the Series/DataFrame and should return scalar or Series/DataFrame. The callable must not change input Series/DataFrame (though pandas doesn’t check it).
New in version 0.18.1: A callable can be used as other.
inplace : bool, default False (Not supported in Dask)
Whether to perform the operation in place on the data.
axis : int, default None (Not supported in Dask)
Alignment axis if needed.
level : int, default None (Not supported in Dask)
Alignment level if needed.
errors : str, {‘raise’, ‘ignore’}, default ‘raise’ (Not supported in Dask)
Note that currently this parameter won’t affect the results and will always coerce to a suitable dtype.
- ‘raise’ : allow exceptions to be raised.
- ‘ignore’ : suppress exceptions. On error return original object.
try_cast : bool, default False (Not supported in Dask)
Try to cast the result back to the input type (if possible).
Returns: Same type as caller
See also
DataFrame.where()
- Return an object of same shape as self.
Notes
The mask method is an application of the if-then idiom. For each element in the calling DataFrame, if cond is False the element is used; otherwise the corresponding element from the DataFrame other is used.
The signature for DataFrame.where() differs from numpy.where(). Roughly df1.where(m, df2) is equivalent to np.where(m, df1, df2).
For further details and examples see the mask documentation in indexing.
>>> s = pd.Series(range(5)) # doctest: +SKIP >>> s.where(s > 0) # doctest: +SKIP 0 NaN 1 1.0 2 2.0 3 3.0 4 4.0 dtype: float64
>>> s.mask(s > 0) # doctest: +SKIP 0 0.0 1 NaN 2 NaN 3 NaN 4 NaN dtype: float64
>>> s.where(s > 1, 10) # doctest: +SKIP 0 10 1 10 2 2 3 3 4 4 dtype: int64
>>> df = pd.DataFrame(np.arange(10).reshape(-1, 2), columns=['A', 'B']) # doctest: +SKIP >>> df # doctest: +SKIP A B 0 0 1 1 2 3 2 4 5 3 6 7 4 8 9 >>> m = df % 3 == 0 # doctest: +SKIP >>> df.where(m, -df) # doctest: +SKIP A B 0 0 -1 1 -2 3 2 -4 -5 3 6 -7 4 -8 9 >>> df.where(m, -df) == np.where(m, df, -df) # doctest: +SKIP A B 0 True True 1 True True 2 True True 3 True True 4 True True >>> df.where(m, -df) == df.mask(~m, -df) # doctest: +SKIP A B 0 True True 1 True True 2 True True 3 True True 4 True True
-
max
(axis=None, skipna=True, split_every=False, out=None)¶ Return the maximum of the values for the requested axis.
This docstring was copied from pandas.core.frame.DataFrame.max.
Some inconsistencies with the Dask version may exist.
If you want the index of the maximum, use idxmax. This is the equivalent of the numpy.ndarray method argmax.
Parameters: axis : {index (0), columns (1)}
Axis for the function to be applied on.
skipna : bool, default True
Exclude NA/null values when computing the result.
level : int or level name, default None (Not supported in Dask)
If the axis is a MultiIndex (hierarchical), count along a particular level, collapsing into a Series.
numeric_only : bool, default None (Not supported in Dask)
Include only float, int, boolean columns. If None, will attempt to use everything, then use only numeric data. Not implemented for Series.
**kwargs
Additional keyword arguments to be passed to the function.
Returns: Series or DataFrame (if level specified)
See also
Series.sum
- Return the sum.
Series.min
- Return the minimum.
Series.max
- Return the maximum.
Series.idxmin
- Return the index of the minimum.
Series.idxmax
- Return the index of the maximum.
DataFrame.sum
- Return the sum over the requested axis.
DataFrame.min
- Return the minimum over the requested axis.
DataFrame.max
- Return the maximum over the requested axis.
DataFrame.idxmin
- Return the index of the minimum over the requested axis.
DataFrame.idxmax
- Return the index of the maximum over the requested axis.
Examples
>>> idx = pd.MultiIndex.from_arrays([ # doctest: +SKIP ... ['warm', 'warm', 'cold', 'cold'], ... ['dog', 'falcon', 'fish', 'spider']], ... names=['blooded', 'animal']) >>> s = pd.Series([4, 2, 0, 8], name='legs', index=idx) # doctest: +SKIP >>> s # doctest: +SKIP blooded animal warm dog 4 falcon 2 cold fish 0 spider 8 Name: legs, dtype: int64
>>> s.max() # doctest: +SKIP 8
Max using level names, as well as indices.
>>> s.max(level='blooded') # doctest: +SKIP blooded warm 4 cold 8 Name: legs, dtype: int64
>>> s.max(level=0) # doctest: +SKIP blooded warm 4 cold 8 Name: legs, dtype: int64
-
mean
(axis=None, skipna=True, split_every=False, dtype=None, out=None)¶ Return the mean of the values for the requested axis.
This docstring was copied from pandas.core.frame.DataFrame.mean.
Some inconsistencies with the Dask version may exist.
Parameters: axis : {index (0), columns (1)}
Axis for the function to be applied on.
skipna : bool, default True
Exclude NA/null values when computing the result.
level : int or level name, default None (Not supported in Dask)
If the axis is a MultiIndex (hierarchical), count along a particular level, collapsing into a Series.
numeric_only : bool, default None (Not supported in Dask)
Include only float, int, boolean columns. If None, will attempt to use everything, then use only numeric data. Not implemented for Series.
**kwargs
Additional keyword arguments to be passed to the function.
Returns: Series or DataFrame (if level specified)
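A minimal illustrative sketch (an assumption, not from the copied docstring), showing that the result stays lazy until computed:
>>> import pandas as pd
>>> import dask.dataframe as dd
>>> ddf = dd.from_pandas(pd.DataFrame({'x': [1.0, 2.0, 3.0, 4.0]}), npartitions=2)
>>> ddf.mean().compute() # column-wise means, combined across partitions (split_every controls the reduction)
>>> ddf.x.mean().compute() # scalar mean of a single column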
-
memory_usage
(index=True, deep=False)¶ Return the memory usage of the Series.
This docstring was copied from pandas.core.series.Series.memory_usage.
Some inconsistencies with the Dask version may exist.
The memory usage can optionally include the contribution of the index and of elements of object dtype.
Parameters: index : bool, default True
Specifies whether to include the memory usage of the Series index.
deep : bool, default False
If True, introspect the data deeply by interrogating object dtypes for system-level memory consumption, and include it in the returned value.
Returns: int
Bytes of memory consumed.
See also
numpy.ndarray.nbytes
- Total bytes consumed by the elements of the array.
DataFrame.memory_usage
- Bytes consumed by a DataFrame.
Examples
>>> s = pd.Series(range(3)) # doctest: +SKIP >>> s.memory_usage() # doctest: +SKIP 152
Not including the index gives the size of the rest of the data, which is necessarily smaller:
>>> s.memory_usage(index=False) # doctest: +SKIP 24
The memory footprint of object values is ignored by default:
>>> s = pd.Series(["a", "b"]) # doctest: +SKIP >>> s.values # doctest: +SKIP array(['a', 'b'], dtype=object) >>> s.memory_usage() # doctest: +SKIP 144 >>> s.memory_usage(deep=True) # doctest: +SKIP 260
-
min
(axis=None, skipna=True, split_every=False, out=None)¶ Return the minimum of the values for the requested axis.
This docstring was copied from pandas.core.frame.DataFrame.min.
Some inconsistencies with the Dask version may exist.
If you want the index of the minimum, use idxmin. This is the equivalent of the numpy.ndarray method argmin.
Parameters: axis : {index (0), columns (1)}
Axis for the function to be applied on.
skipna : bool, default True
Exclude NA/null values when computing the result.
level : int or level name, default None (Not supported in Dask)
If the axis is a MultiIndex (hierarchical), count along a particular level, collapsing into a Series.
numeric_only : bool, default None (Not supported in Dask)
Include only float, int, boolean columns. If None, will attempt to use everything, then use only numeric data. Not implemented for Series.
**kwargs
Additional keyword arguments to be passed to the function.
Returns: Series or DataFrame (if level specified)
See also
Series.sum
- Return the sum.
Series.min
- Return the minimum.
Series.max
- Return the maximum.
Series.idxmin
- Return the index of the minimum.
Series.idxmax
- Return the index of the maximum.
DataFrame.sum
- Return the sum over the requested axis.
DataFrame.min
- Return the minimum over the requested axis.
DataFrame.max
- Return the maximum over the requested axis.
DataFrame.idxmin
- Return the index of the minimum over the requested axis.
DataFrame.idxmax
- Return the index of the maximum over the requested axis.
Examples
>>> idx = pd.MultiIndex.from_arrays([ # doctest: +SKIP ... ['warm', 'warm', 'cold', 'cold'], ... ['dog', 'falcon', 'fish', 'spider']], ... names=['blooded', 'animal']) >>> s = pd.Series([4, 2, 0, 8], name='legs', index=idx) # doctest: +SKIP >>> s # doctest: +SKIP blooded animal warm dog 4 falcon 2 cold fish 0 spider 8 Name: legs, dtype: int64
>>> s.min() # doctest: +SKIP 0
Min using level names, as well as indices.
>>> s.min(level='blooded') # doctest: +SKIP blooded warm 2 cold 0 Name: legs, dtype: int64
>>> s.min(level=0) # doctest: +SKIP blooded warm 2 cold 0 Name: legs, dtype: int64
-
mod
(other, level=None, fill_value=None, axis=0)¶ Return Modulo of series and other, element-wise (binary operator mod).
This docstring was copied from pandas.core.series.Series.mod.
Some inconsistencies with the Dask version may exist.
Equivalent to series % other, but with support to substitute a fill_value for missing data in one of the inputs.
Parameters: other : Series or scalar value
fill_value : None or float value, default None (NaN)
Fill existing missing (NaN) values, and any new element needed for successful Series alignment, with this value before computation. If data in both corresponding Series locations is missing the result will be missing.
level : int or name
Broadcast across a level, matching Index values on the passed MultiIndex level.
Returns: Series
The result of the operation.
See also
Examples
>>> a = pd.Series([1, 1, 1, np.nan], index=['a', 'b', 'c', 'd']) # doctest: +SKIP >>> a # doctest: +SKIP a 1.0 b 1.0 c 1.0 d NaN dtype: float64 >>> b = pd.Series([1, np.nan, 1, np.nan], index=['a', 'b', 'd', 'e']) # doctest: +SKIP >>> b # doctest: +SKIP a 1.0 b NaN d 1.0 e NaN dtype: float64 >>> a.mod(b, fill_value=0) # doctest: +SKIP a 0.0 b NaN c NaN d 0.0 e NaN dtype: float64
-
mul
(other, level=None, fill_value=None, axis=0)¶ Return Multiplication of series and other, element-wise (binary operator mul).
This docstring was copied from pandas.core.series.Series.mul.
Some inconsistencies with the Dask version may exist.
Equivalent to series * other, but with support to substitute a fill_value for missing data in one of the inputs.
Parameters: other : Series or scalar value
fill_value : None or float value, default None (NaN)
Fill existing missing (NaN) values, and any new element needed for successful Series alignment, with this value before computation. If data in both corresponding Series locations is missing the result will be missing.
level : int or name
Broadcast across a level, matching Index values on the passed MultiIndex level.
Returns: Series
The result of the operation.
See also
Examples
>>> a = pd.Series([1, 1, 1, np.nan], index=['a', 'b', 'c', 'd']) # doctest: +SKIP >>> a # doctest: +SKIP a 1.0 b 1.0 c 1.0 d NaN dtype: float64 >>> b = pd.Series([1, np.nan, 1, np.nan], index=['a', 'b', 'd', 'e']) # doctest: +SKIP >>> b # doctest: +SKIP a 1.0 b NaN d 1.0 e NaN dtype: float64 >>> a.multiply(b, fill_value=0) # doctest: +SKIP a 1.0 b 0.0 c 0.0 d 0.0 e NaN dtype: float64
-
nbytes
¶ Number of bytes
-
ndim
¶ Return dimensionality
-
ne
(other, level=None, fill_value=None, axis=0)¶ Return Not equal to of series and other, element-wise (binary operator ne).
This docstring was copied from pandas.core.series.Series.ne.
Some inconsistencies with the Dask version may exist.
Equivalent to series != other, but with support to substitute a fill_value for missing data in one of the inputs.
Parameters: other : Series or scalar value
fill_value : None or float value, default None (NaN)
Fill existing missing (NaN) values, and any new element needed for successful Series alignment, with this value before computation. If data in both corresponding Series locations is missing the result will be missing.
level : int or name
Broadcast across a level, matching Index values on the passed MultiIndex level.
Returns: Series
The result of the operation.
-
nlargest
(n=5, split_every=None)¶ Return the largest n elements.
This docstring was copied from pandas.core.series.Series.nlargest.
Some inconsistencies with the Dask version may exist.
Parameters: n : int, default 5
Return this many descending sorted values.
keep : {‘first’, ‘last’, ‘all’}, default ‘first’ (Not supported in Dask)
When there are duplicate values that cannot all fit in a Series of n elements:
first : return the first n occurrences in order of appearance.
last : return the last n occurrences in reverse order of appearance.
all : keep all occurrences. This can result in a Series of size larger than n.
Returns: Series
The n largest values in the Series, sorted in decreasing order.
See also
Series.nsmallest
- Get the n smallest elements.
Series.sort_values
- Sort Series by values.
Series.head
- Return the first n rows.
Notes
Faster than .sort_values(ascending=False).head(n) for small n relative to the size of the Series object.
Examples
>>> countries_population = {"Italy": 59000000, "France": 65000000, # doctest: +SKIP ... "Malta": 434000, "Maldives": 434000, ... "Brunei": 434000, "Iceland": 337000, ... "Nauru": 11300, "Tuvalu": 11300, ... "Anguilla": 11300, "Monserat": 5200} >>> s = pd.Series(countries_population) # doctest: +SKIP >>> s # doctest: +SKIP Italy 59000000 France 65000000 Malta 434000 Maldives 434000 Brunei 434000 Iceland 337000 Nauru 11300 Tuvalu 11300 Anguilla 11300 Monserat 5200 dtype: int64
The n largest elements where n=5 by default.
>>> s.nlargest() # doctest: +SKIP France 65000000 Italy 59000000 Malta 434000 Maldives 434000 Brunei 434000 dtype: int64
The n largest elements where n=3. Default keep value is ‘first’ so Malta will be kept.
>>> s.nlargest(3) # doctest: +SKIP France 65000000 Italy 59000000 Malta 434000 dtype: int64
The n largest elements where n=3 and keeping the last duplicates. Brunei will be kept since it is the last with value 434000 based on the index order.
>>> s.nlargest(3, keep='last') # doctest: +SKIP France 65000000 Italy 59000000 Brunei 434000 dtype: int64
The n largest elements where n=3 with all duplicates kept. Note that the returned Series has five elements due to the three duplicates.
>>> s.nlargest(3, keep='all') # doctest: +SKIP France 65000000 Italy 59000000 Malta 434000 Maldives 434000 Brunei 434000 dtype: int64
-
notnull
()¶ Detect existing (non-missing) values.
This docstring was copied from pandas.core.frame.DataFrame.notnull.
Some inconsistencies with the Dask version may exist.
Return a boolean same-sized object indicating if the values are not NA. Non-missing values get mapped to True. Characters such as empty strings '' or numpy.inf are not considered NA values (unless you set pandas.options.mode.use_inf_as_na = True). NA values, such as None or numpy.NaN, get mapped to False values.
Returns: DataFrame
Mask of bool values for each element in DataFrame that indicates whether an element is not an NA value.
See also
DataFrame.notnull
- Alias of notna.
DataFrame.isna
- Boolean inverse of notna.
DataFrame.dropna
- Omit axes labels with missing values.
notna
- Top-level notna.
Examples
Show which entries in a DataFrame are not NA.
>>> df = pd.DataFrame({'age': [5, 6, np.NaN], # doctest: +SKIP ... 'born': [pd.NaT, pd.Timestamp('1939-05-27'), ... pd.Timestamp('1940-04-25')], ... 'name': ['Alfred', 'Batman', ''], ... 'toy': [None, 'Batmobile', 'Joker']}) >>> df # doctest: +SKIP age born name toy 0 5.0 NaT Alfred None 1 6.0 1939-05-27 Batman Batmobile 2 NaN 1940-04-25 Joker
>>> df.notna() # doctest: +SKIP age born name toy 0 True False True False 1 True True True True 2 False True True True
Show which entries in a Series are not NA.
>>> ser = pd.Series([5, 6, np.NaN]) # doctest: +SKIP >>> ser # doctest: +SKIP 0 5.0 1 6.0 2 NaN dtype: float64
>>> ser.notna() # doctest: +SKIP 0 True 1 True 2 False dtype: bool
-
npartitions
¶ Return number of partitions
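Examples
A minimal usage sketch; the DataFrame contents here are illustrative assumptions:
>>> import pandas as pd # doctest: +SKIP
>>> import dask.dataframe as dd # doctest: +SKIP
>>> ddf = dd.from_pandas(pd.DataFrame({'x': range(8)}), npartitions=4) # doctest: +SKIP
>>> ddf.npartitions # doctest: +SKIP 4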
-
nsmallest
(n=5, split_every=None)¶ Return the smallest n elements.
This docstring was copied from pandas.core.series.Series.nsmallest.
Some inconsistencies with the Dask version may exist.
Parameters: n : int, default 5
Return this many ascending sorted values.
keep : {‘first’, ‘last’, ‘all’}, default ‘first’ (Not supported in Dask)
When there are duplicate values that cannot all fit in a Series of n elements:
first : return the first n occurrences in order of appearance.
last : return the last n occurrences in reverse order of appearance.
all : keep all occurrences. This can result in a Series of size larger than n.
Returns: Series
The n smallest values in the Series, sorted in increasing order.
See also
Series.nlargest
- Get the n largest elements.
Series.sort_values
- Sort Series by values.
Series.head
- Return the first n rows.
Notes
Faster than .sort_values().head(n) for small n relative to the size of the Series object.
Examples
>>> countries_population = {"Italy": 59000000, "France": 65000000, # doctest: +SKIP ... "Brunei": 434000, "Malta": 434000, ... "Maldives": 434000, "Iceland": 337000, ... "Nauru": 11300, "Tuvalu": 11300, ... "Anguilla": 11300, "Monserat": 5200} >>> s = pd.Series(countries_population) # doctest: +SKIP >>> s # doctest: +SKIP Italy 59000000 France 65000000 Brunei 434000 Malta 434000 Maldives 434000 Iceland 337000 Nauru 11300 Tuvalu 11300 Anguilla 11300 Monserat 5200 dtype: int64
The n smallest elements where n=5 by default.
>>> s.nsmallest() # doctest: +SKIP Monserat 5200 Nauru 11300 Tuvalu 11300 Anguilla 11300 Iceland 337000 dtype: int64
The n smallest elements where n=3. Default keep value is ‘first’ so Nauru and Tuvalu will be kept.
>>> s.nsmallest(3) # doctest: +SKIP Monserat 5200 Nauru 11300 Tuvalu 11300 dtype: int64
The n smallest elements where n=3 and keeping the last duplicates. Anguilla and Tuvalu will be kept since they are the last with value 11300 based on the index order.
>>> s.nsmallest(3, keep='last') # doctest: +SKIP Monserat 5200 Anguilla 11300 Tuvalu 11300 dtype: int64
The n smallest elements where n=3 with all duplicates kept. Note that the returned Series has four elements due to the three duplicates.
>>> s.nsmallest(3, keep='all') # doctest: +SKIP Monserat 5200 Nauru 11300 Tuvalu 11300 Anguilla 11300 dtype: int64
-
nunique
(split_every=None)¶ Return number of unique elements in the object.
This docstring was copied from pandas.core.series.Series.nunique.
Some inconsistencies with the Dask version may exist.
Excludes NA values by default.
Parameters: dropna : bool, default True (Not supported in Dask)
Don’t include NaN in the count.
Returns: int
See also
DataFrame.nunique
- Method nunique for DataFrame.
Series.count
- Count non-NA/null observations in the Series.
Examples
>>> s = pd.Series([1, 3, 5, 7, 7]) # doctest: +SKIP >>> s # doctest: +SKIP 0 1 1 3 2 5 3 7 4 7 dtype: int64
>>> s.nunique() # doctest: +SKIP 4
-
nunique_approx
(split_every=None)¶ Approximate number of unique rows.
This method uses the HyperLogLog algorithm for cardinality estimation to compute the approximate number of unique rows. The approximate error is 0.406%.
Parameters: split_every : int, optional
Group partitions into groups of this size while performing a tree-reduction. If set to False, no tree-reduction will be used. Default is 8.
Returns: a float representing the approximate number of elements
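Examples
A minimal sketch, assuming a Dask Series ddf.x; the result is a lazy Scalar that must be computed:
>>> approx = ddf.x.nunique_approx() # doctest: +SKIP
>>> float(approx.compute()) # approximate count of distinct values # doctest: +SKIP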
-
partitions
¶ Slice dataframe by partitions
This allows partitionwise slicing of a Dask DataFrame. You can perform normal NumPy-style slicing, but rather than slicing elements of the array you slice along partitions so, for example, df.partitions[:5] produces a new Dask DataFrame of the first five partitions.
Returns: A Dask DataFrame
Examples
>>> df.partitions[0] # doctest: +SKIP >>> df.partitions[:3] # doctest: +SKIP >>> df.partitions[::10] # doctest: +SKIP
-
persist
(**kwargs)¶ Persist this dask collection into memory
This turns a lazy Dask collection into a Dask collection with the same metadata, but now with the results fully computed or actively computing in the background.
The action of this function differs significantly depending on the active task scheduler. If the task scheduler supports asynchronous computing, as is the case with the dask.distributed scheduler, then persist will return immediately and the return value's task graph will contain Dask Future objects. However, if the task scheduler only supports blocking computation, then the call to persist will block and the return value's task graph will contain concrete Python results.
This function is particularly useful when using distributed systems, because the results will be kept in distributed memory, rather than returned to the local process as with compute.
Parameters: scheduler : string, optional
Which scheduler to use like “threads”, “synchronous” or “processes”. If not provided, the default is to check the global settings first, and then fall back to the collection defaults.
optimize_graph : bool, optional
If True [default], the graph is optimized before computation. Otherwise the graph is run as is. This can be useful for debugging.
**kwargs
Extra keywords to forward to the scheduler function.
Returns: New dask collections backed by in-memory data
See also
dask.base.persist
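Examples
A minimal sketch of the typical pattern, assuming a Dask DataFrame ddf with a numeric column x:
>>> filtered = ddf[ddf.x > 0] # lazy; nothing is computed yet # doctest: +SKIP
>>> filtered = filtered.persist() # start computing and keep partitions in memory # doctest: +SKIP
>>> filtered.x.sum().compute() # later work reuses the persisted results # doctest: +SKIP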
-
pipe
(func, *args, **kwargs)¶ Apply func(self, *args, **kwargs).
This docstring was copied from pandas.core.frame.DataFrame.pipe.
Some inconsistencies with the Dask version may exist.
Parameters: func : function
Function to apply to the Series/DataFrame. args and kwargs are passed into func. Alternatively a (callable, data_keyword) tuple where data_keyword is a string indicating the keyword of callable that expects the Series/DataFrame.
args : iterable, optional
Positional arguments passed into func.
kwargs : mapping, optional
A dictionary of keyword arguments passed into func.
Returns: object : the return type of func.
See also
Notes
Use .pipe when chaining together functions that expect Series, DataFrames or GroupBy objects. Instead of writing
>>> f(g(h(df), arg1=a), arg2=b, arg3=c) # doctest: +SKIP
You can write
>>> (df.pipe(h) # doctest: +SKIP ... .pipe(g, arg1=a) ... .pipe(f, arg2=b, arg3=c) ... )
If you have a function that takes the data as (say) the second argument, pass a tuple indicating which keyword expects the data. For example, suppose f takes its data as arg2:
>>> (df.pipe(h) # doctest: +SKIP ... .pipe(g, arg1=a) ... .pipe((f, 'arg2'), arg1=a, arg3=c) ... )
-
pow
(other, level=None, fill_value=None, axis=0)¶ Return Exponential power of series and other, element-wise (binary operator pow).
This docstring was copied from pandas.core.series.Series.pow.
Some inconsistencies with the Dask version may exist.
Equivalent to series ** other, but with support to substitute a fill_value for missing data in one of the inputs.
Parameters: other : Series or scalar value
fill_value : None or float value, default None (NaN)
Fill existing missing (NaN) values, and any new element needed for successful Series alignment, with this value before computation. If data in both corresponding Series locations is missing the result will be missing.
level : int or name
Broadcast across a level, matching Index values on the passed MultiIndex level.
Returns: Series
The result of the operation.
See also
Examples
>>> a = pd.Series([1, 1, 1, np.nan], index=['a', 'b', 'c', 'd']) # doctest: +SKIP >>> a # doctest: +SKIP a 1.0 b 1.0 c 1.0 d NaN dtype: float64 >>> b = pd.Series([1, np.nan, 1, np.nan], index=['a', 'b', 'd', 'e']) # doctest: +SKIP >>> b # doctest: +SKIP a 1.0 b NaN d 1.0 e NaN dtype: float64 >>> a.pow(b, fill_value=0) # doctest: +SKIP a 1.0 b 1.0 c 1.0 d 0.0 e NaN dtype: float64
-
prod
(axis=None, skipna=True, split_every=False, dtype=None, out=None, min_count=None)¶ Return the product of the values for the requested axis.
This docstring was copied from pandas.core.frame.DataFrame.prod.
Some inconsistencies with the Dask version may exist.
Parameters: axis : {index (0), columns (1)}
Axis for the function to be applied on.
skipna : bool, default True
Exclude NA/null values when computing the result.
level : int or level name, default None (Not supported in Dask)
If the axis is a MultiIndex (hierarchical), count along a particular level, collapsing into a Series.
numeric_only : bool, default None (Not supported in Dask)
Include only float, int, boolean columns. If None, will attempt to use everything, then use only numeric data. Not implemented for Series.
min_count : int, default 0
The required number of valid values to perform the operation. If fewer than min_count non-NA values are present the result will be NA.
New in version 0.22.0: Added with the default being 0. This means the sum of an all-NA or empty Series is 0, and the product of an all-NA or empty Series is 1.
**kwargs
Additional keyword arguments to be passed to the function.
Returns: Series or DataFrame (if level specified)
Examples
By default, the product of an empty or all-NA Series is 1.
>>> pd.Series([]).prod() # doctest: +SKIP 1.0
This can be controlled with the min_count parameter.
>>> pd.Series([]).prod(min_count=1) # doctest: +SKIP nan
Thanks to the skipna parameter, min_count handles all-NA and empty series identically.
>>> pd.Series([np.nan]).prod() # doctest: +SKIP 1.0
>>> pd.Series([np.nan]).prod(min_count=1) # doctest: +SKIP nan
-
quantile
(q=0.5, method='default')¶ Approximate quantiles of Series
Parameters: q : list/array of floats, default 0.5 (50%)
Iterable of numbers ranging from 0 to 1 for the desired quantiles
method : {‘default’, ‘tdigest’, ‘dask’}, optional
What method to use. By default will use dask's internal custom algorithm ('dask'). If set to 'tdigest', it will use tdigest for floats and ints and fall back to 'dask' otherwise.
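Examples
A minimal sketch, assuming a numeric Dask Series ddf.x:
>>> ddf.x.quantile().compute() # median by default (q=0.5) # doctest: +SKIP
>>> ddf.x.quantile([0.25, 0.75]).compute() # several quantiles at once # doctest: +SKIP
>>> ddf.x.quantile(0.9, method='tdigest').compute() # tdigest-based estimate # doctest: +SKIP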
-
radd
(other, level=None, fill_value=None, axis=0)¶ Return Addition of series and other, element-wise (binary operator radd).
This docstring was copied from pandas.core.series.Series.radd.
Some inconsistencies with the Dask version may exist.
Equivalent to other + series, but with support to substitute a fill_value for missing data in one of the inputs.
Parameters: other : Series or scalar value
fill_value : None or float value, default None (NaN)
Fill existing missing (NaN) values, and any new element needed for successful Series alignment, with this value before computation. If data in both corresponding Series locations is missing the result will be missing.
level : int or name
Broadcast across a level, matching Index values on the passed MultiIndex level.
Returns: Series
The result of the operation.
See also
Examples
>>> a = pd.Series([1, 1, 1, np.nan], index=['a', 'b', 'c', 'd']) # doctest: +SKIP >>> a # doctest: +SKIP a 1.0 b 1.0 c 1.0 d NaN dtype: float64 >>> b = pd.Series([1, np.nan, 1, np.nan], index=['a', 'b', 'd', 'e']) # doctest: +SKIP >>> b # doctest: +SKIP a 1.0 b NaN d 1.0 e NaN dtype: float64 >>> a.add(b, fill_value=0) # doctest: +SKIP a 2.0 b 1.0 c 1.0 d 1.0 e NaN dtype: float64
-
random_split
(frac, random_state=None)¶ Pseudorandomly split dataframe into different pieces row-wise
Parameters: frac : list
List of floats that should sum to one.
random_state: int or np.random.RandomState
If int, create a new RandomState with this as the seed.
Otherwise, draw from the passed RandomState.
See also
dask.DataFrame.sample
Examples
50/50 split
>>> a, b = df.random_split([0.5, 0.5]) # doctest: +SKIP
80/10/10 split, consistent random_state
>>> a, b, c = df.random_split([0.8, 0.1, 0.1], random_state=123) # doctest: +SKIP
-
rdiv
(other, level=None, fill_value=None, axis=0)¶ Return Floating division of series and other, element-wise (binary operator rtruediv).
This docstring was copied from pandas.core.series.Series.rdiv.
Some inconsistencies with the Dask version may exist.
Equivalent to other / series, but with support to substitute a fill_value for missing data in one of the inputs.
Parameters: other : Series or scalar value
fill_value : None or float value, default None (NaN)
Fill existing missing (NaN) values, and any new element needed for successful Series alignment, with this value before computation. If data in both corresponding Series locations is missing the result will be missing.
level : int or name
Broadcast across a level, matching Index values on the passed MultiIndex level.
Returns: Series
The result of the operation.
See also
Examples
>>> a = pd.Series([1, 1, 1, np.nan], index=['a', 'b', 'c', 'd']) # doctest: +SKIP >>> a # doctest: +SKIP a 1.0 b 1.0 c 1.0 d NaN dtype: float64 >>> b = pd.Series([1, np.nan, 1, np.nan], index=['a', 'b', 'd', 'e']) # doctest: +SKIP >>> b # doctest: +SKIP a 1.0 b NaN d 1.0 e NaN dtype: float64 >>> a.divide(b, fill_value=0) # doctest: +SKIP a 1.0 b inf c inf d 0.0 e NaN dtype: float64
-
reduction
(chunk, aggregate=None, combine=None, meta='__no_default__', token=None, split_every=None, chunk_kwargs=None, aggregate_kwargs=None, combine_kwargs=None, **kwargs)¶ Generic row-wise reductions.
Parameters: chunk : callable
Function to operate on each partition. Should return a pandas.DataFrame, pandas.Series, or a scalar.
aggregate : callable, optional
Function to operate on the concatenated result of chunk. If not specified, defaults to chunk. Used to do the final aggregation in a tree reduction.
The input to aggregate depends on the output of chunk. If the output of chunk is a:
- scalar: Input is a Series, with one row per partition.
- Series: Input is a DataFrame, with one row per partition. Columns are the rows in the output series.
- DataFrame: Input is a DataFrame, with one row per partition. Columns are the columns in the output dataframes.
Should return a pandas.DataFrame, pandas.Series, or a scalar.
combine : callable, optional
Function to operate on intermediate concatenated results of chunk in a tree-reduction. If not provided, defaults to aggregate. The input/output requirements should match that of aggregate described above.
meta : pd.DataFrame, pd.Series, dict, iterable, tuple, optional
An empty pd.DataFrame or pd.Series that matches the dtypes and column names of the output. This metadata is necessary for many algorithms in dask dataframe to work. For ease of use, some alternative inputs are also available. Instead of a DataFrame, a dict of {name: dtype} or iterable of (name, dtype) can be provided (note that the order of the names should match the order of the columns). Instead of a series, a tuple of (name, dtype) can be used. If not provided, dask will try to infer the metadata. This may lead to unexpected results, so providing meta is recommended. For more information, see dask.dataframe.utils.make_meta.
token : str, optional
The name to use for the output keys.
split_every : int, optional
Group partitions into groups of this size while performing a tree-reduction. If set to False, no tree-reduction will be used, and all intermediates will be concatenated and passed to aggregate. Default is 8.
chunk_kwargs : dict, optional
Keyword arguments to pass on to chunk only.
aggregate_kwargs : dict, optional
Keyword arguments to pass on to aggregate only.
combine_kwargs : dict, optional
Keyword arguments to pass on to combine only.
kwargs :
All remaining keywords will be passed to chunk, combine, and aggregate.
Examples
>>> import pandas as pd >>> import dask.dataframe as dd >>> df = pd.DataFrame({'x': range(50), 'y': range(50, 100)}) >>> ddf = dd.from_pandas(df, npartitions=4)
Count the number of rows in a DataFrame. To do this, count the number of rows in each partition, then sum the results:
>>> res = ddf.reduction(lambda x: x.count(), ... aggregate=lambda x: x.sum()) >>> res.compute() x 50 y 50 dtype: int64
Count the number of rows in a Series with elements greater than or equal to a value (provided via a keyword).
>>> def count_greater(x, value=0): ... return (x >= value).sum() >>> res = ddf.x.reduction(count_greater, aggregate=lambda x: x.sum(), ... chunk_kwargs={'value': 25}) >>> res.compute() 25
Aggregate both the sum and count of a Series at the same time:
>>> def sum_and_count(x): ... return pd.Series({'count': x.count(), 'sum': x.sum()}, ... index=['count', 'sum']) >>> res = ddf.x.reduction(sum_and_count, aggregate=lambda x: x.sum()) >>> res.compute() count 50 sum 1225 dtype: int64
Doing the same, but for a DataFrame. Here chunk returns a DataFrame, meaning the input to aggregate is a DataFrame with an index with non-unique entries for both ‘x’ and ‘y’. We groupby the index, and sum each group to get the final result.
>>> def sum_and_count(x): ... return pd.DataFrame({'count': x.count(), 'sum': x.sum()}, ... columns=['count', 'sum']) >>> res = ddf.reduction(sum_and_count, ... aggregate=lambda x: x.groupby(level=0).sum()) >>> res.compute() count sum x 50 1225 y 50 3725
-
rename
(index=None, inplace=False, sorted_index=False)¶ Alter Series index labels or name
Function / dict values must be unique (1-to-1). Labels not contained in a dict / Series will be left as-is. Extra labels listed don’t throw an error.
Alternatively, change Series.name with a scalar value.
Parameters: index : scalar, hashable sequence, dict-like or callable, optional
If dict-like or callable, the transformation is applied to the index. Scalar or hashable sequence-like will alter the Series.name attribute.
inplace : boolean, default False
Whether to return a new Series or modify this one in place.
sorted_index : bool, default False
If true, the output Series will have known divisions inferred from the input series and the transformation. Ignored for non-callable/dict-like index or when the input series has unknown divisions. Note that this may only be set to True if you know that the transformed index is monotonically increasing. Dask will check that transformed divisions are monotonic, but cannot check all the values between divisions, so incorrectly setting this can result in bugs.
Returns: renamed : Series
See also
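Examples
A minimal sketch of the supported call patterns, assuming a Dask Series s:
>>> s.rename('total') # set Series.name # doctest: +SKIP
>>> s.rename({0: 10, 1: 11}) # relabel individual index entries # doctest: +SKIP
>>> s.rename(lambda i: i * 2, sorted_index=True) # monotonic callable keeps known divisions # doctest: +SKIP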
-
repartition
(divisions=None, npartitions=None, partition_size=None, freq=None, force=False)¶ Repartition dataframe along new divisions
Parameters: divisions : list, optional
List of partitions to be used. Only used if npartitions and partition_size aren't specified.
npartitions : int, optional
Number of partitions of output. Only used if partition_size isn’t specified.
partition_size: int or string, optional
Max number of bytes of memory for each partition. Use numbers or strings like 5MB. If specified, npartitions and divisions will be ignored.
Warning
This keyword argument triggers computation to determine the memory size of each partition, which may be expensive.
freq : str, pd.Timedelta
A period on which to partition timeseries data like '7D' or '12h' or pd.Timedelta(hours=12). Assumes a datetime index.
force : bool, default False
Allows the expansion of the existing divisions. If False then the new divisions lower and upper bounds must be the same as the old divisions.
Notes
Exactly one of divisions, npartitions, partition_size, or freq should be specified. A ValueError will be raised when that is not the case.
Examples
>>> df = df.repartition(npartitions=10) # doctest: +SKIP >>> df = df.repartition(divisions=[0, 5, 10, 20]) # doctest: +SKIP >>> df = df.repartition(freq='7d') # doctest: +SKIP
-
replace
(to_replace=None, value=None, regex=False)¶ Replace values given in to_replace with value.
This docstring was copied from pandas.core.frame.DataFrame.replace.
Some inconsistencies with the Dask version may exist.
Values of the DataFrame are replaced with other values dynamically. This differs from updating with .loc or .iloc, which require you to specify a location to update with some value.
Parameters: to_replace : str, regex, list, dict, Series, int, float, or None
How to find the values that will be replaced.
numeric, str or regex:
- numeric: numeric values equal to to_replace will be replaced with value
- str: string exactly matching to_replace will be replaced with value
- regex: regexs matching to_replace will be replaced with value
list of str, regex, or numeric:
- First, if to_replace and value are both lists, they must be the same length.
- Second, if regex=True then all of the strings in both lists will be interpreted as regexs, otherwise they will match directly. This doesn’t matter much for value since there are only a few possible substitution regexes you can use.
- str, regex and numeric rules apply as above.
dict:
- Dicts can be used to specify different replacement values for different existing values. For example, {'a': 'b', 'y': 'z'} replaces the value ‘a’ with ‘b’ and ‘y’ with ‘z’. To use a dict in this way the value parameter should be None.
- For a DataFrame a dict can specify that different values should be replaced in different columns. For example, {'a': 1, 'b': 'z'} looks for the value 1 in column ‘a’ and the value ‘z’ in column ‘b’ and replaces these values with whatever is specified in value. The value parameter should not be None in this case. You can treat this as a special case of passing two lists except that you are specifying the column to search in.
- For a DataFrame nested dictionaries, e.g., {'a': {'b': np.nan}}, are read as follows: look in column ‘a’ for the value ‘b’ and replace it with NaN. The value parameter should be None to use a nested dict in this way. You can nest regular expressions as well. Note that column names (the top-level dictionary keys in a nested dictionary) cannot be regular expressions.
None:
- This means that the regex argument must be a string, compiled regular expression, or list, dict, ndarray or Series of such elements. If value is also None then this must be a nested dictionary or Series.
See the examples section for examples of each of these.
value : scalar, dict, list, str, regex, default None
Value to replace any values matching to_replace with. For a DataFrame a dict of values can be used to specify which value to use for each column (columns not in the dict will not be filled). Regular expressions, strings and lists or dicts of such objects are also allowed.
inplace : bool, default False (Not supported in Dask)
If True, in place. Note: this will modify any other views on this object (e.g. a column from a DataFrame). Returns the caller if this is True.
limit : int, default None (Not supported in Dask)
Maximum size gap to forward or backward fill.
regex : bool or same types as to_replace, default False
Whether to interpret to_replace and/or value as regular expressions. If this is True then to_replace must be a string. Alternatively, this could be a regular expression or a list, dict, or array of regular expressions in which case to_replace must be None.
method : {‘pad’, ‘ffill’, ‘bfill’, None} (Not supported in Dask)
The method to use for replacement, when to_replace is a scalar, list or tuple and value is None.
Changed in version 0.23.0: Added to DataFrame.
Returns: DataFrame
Object after replacement.
Raises: AssertionError
- If regex is not a bool and to_replace is not None.
TypeError
- If to_replace is a dict and value is not a list, dict, ndarray, or Series.
- If to_replace is None and regex is not compilable into a regular expression or is a list, dict, ndarray, or Series.
- When replacing multiple bool or datetime64 objects and the arguments to to_replace do not match the type of the value being replaced.
ValueError
- If a list or an ndarray is passed to to_replace and value but they are not the same length.
See also
DataFrame.fillna
- Fill NA values.
DataFrame.where
- Replace values based on boolean condition.
Series.str.replace
- Simple string replacement.
Notes
- Regex substitution is performed under the hood with re.sub. The rules for substitution for re.sub are the same.
- Regular expressions will only substitute on strings, meaning you cannot provide, for example, a regular expression matching floating point numbers and expect the columns in your frame that have a numeric dtype to be matched. However, if those floating point numbers are strings, then you can do this.
- This method has a lot of options. You are encouraged to experiment and play with this method to gain intuition about how it works.
- When dict is used as the to_replace value, it is like key(s) in the dict are the to_replace part and value(s) in the dict are the value parameter.
Examples
Scalar `to_replace` and `value`
>>> s = pd.Series([0, 1, 2, 3, 4]) # doctest: +SKIP >>> s.replace(0, 5) # doctest: +SKIP 0 5 1 1 2 2 3 3 4 4 dtype: int64
>>> df = pd.DataFrame({'A': [0, 1, 2, 3, 4], # doctest: +SKIP ... 'B': [5, 6, 7, 8, 9], ... 'C': ['a', 'b', 'c', 'd', 'e']}) >>> df.replace(0, 5) # doctest: +SKIP A B C 0 5 5 a 1 1 6 b 2 2 7 c 3 3 8 d 4 4 9 e
List-like `to_replace`
>>> df.replace([0, 1, 2, 3], 4) # doctest: +SKIP A B C 0 4 5 a 1 4 6 b 2 4 7 c 3 4 8 d 4 4 9 e
>>> df.replace([0, 1, 2, 3], [4, 3, 2, 1]) # doctest: +SKIP A B C 0 4 5 a 1 3 6 b 2 2 7 c 3 1 8 d 4 4 9 e
>>> s.replace([1, 2], method='bfill') # doctest: +SKIP 0 0 1 3 2 3 3 3 4 4 dtype: int64
dict-like `to_replace`
>>> df.replace({0: 10, 1: 100}) # doctest: +SKIP A B C 0 10 5 a 1 100 6 b 2 2 7 c 3 3 8 d 4 4 9 e
>>> df.replace({'A': 0, 'B': 5}, 100) # doctest: +SKIP A B C 0 100 100 a 1 1 6 b 2 2 7 c 3 3 8 d 4 4 9 e
>>> df.replace({'A': {0: 100, 4: 400}}) # doctest: +SKIP A B C 0 100 5 a 1 1 6 b 2 2 7 c 3 3 8 d 4 400 9 e
Regular expression `to_replace`
>>> df = pd.DataFrame({'A': ['bat', 'foo', 'bait'], # doctest: +SKIP ... 'B': ['abc', 'bar', 'xyz']}) >>> df.replace(to_replace=r'^ba.$', value='new', regex=True) # doctest: +SKIP A B 0 new abc 1 foo new 2 bait xyz
>>> df.replace({'A': r'^ba.$'}, {'A': 'new'}, regex=True) # doctest: +SKIP A B 0 new abc 1 foo bar 2 bait xyz
>>> df.replace(regex=r'^ba.$', value='new') # doctest: +SKIP A B 0 new abc 1 foo new 2 bait xyz
>>> df.replace(regex={r'^ba.$': 'new', 'foo': 'xyz'}) # doctest: +SKIP A B 0 new abc 1 xyz new 2 bait xyz
>>> df.replace(regex=[r'^ba.$', 'foo'], value='new') # doctest: +SKIP A B 0 new abc 1 new new 2 bait xyz
Note that when replacing multiple bool or datetime64 objects, the data types in the to_replace parameter must match the data type of the value being replaced:
>>> df = pd.DataFrame({'A': [True, False, True], # doctest: +SKIP ... 'B': [False, True, False]}) >>> df.replace({'a string': 'new value', True: False}) # raises # doctest: +SKIP Traceback (most recent call last): ... TypeError: Cannot compare types 'ndarray(dtype=bool)' and 'str'
This raises a TypeError because one of the dict keys is not of the correct type for replacement.
Compare the behavior of s.replace({'a': None}) and s.replace('a', None) to understand the peculiarities of the to_replace parameter:
>>> s = pd.Series([10, 'a', 'a', 'b', 'a']) # doctest: +SKIP
When one uses a dict as the to_replace value, it is like the value(s) in the dict are equal to the value parameter. s.replace({'a': None}) is equivalent to s.replace(to_replace={'a': None}, value=None, method=None):
>>> s.replace({'a': None}) # doctest: +SKIP 0 10 1 None 2 None 3 b 4 None dtype: object
When value=None and to_replace is a scalar, list or tuple, replace uses the method parameter (default ‘pad’) to do the replacement. So this is why the ‘a’ values are being replaced by 10 in rows 1 and 2 and ‘b’ in row 4 in this case. The command s.replace('a', None) is actually equivalent to s.replace(to_replace='a', value=None, method='pad'):
>>> s.replace('a', None) # doctest: +SKIP 0 10 1 10 2 10 3 b 4 b dtype: object
-
resample
(rule, closed=None, label=None)¶ Resample time-series data.
This docstring was copied from pandas.core.frame.DataFrame.resample.
Some inconsistencies with the Dask version may exist.
Convenience method for frequency conversion and resampling of time series. Object must have a datetime-like index (DatetimeIndex, PeriodIndex, or TimedeltaIndex), or pass datetime-like values to the on or level keyword.
Parameters: rule : DateOffset, Timedelta or str
The offset string or object representing target conversion.
how : str (Not supported in Dask)
Method for down/re-sampling, default to ‘mean’ for downsampling.
Deprecated since version 0.18.0: The new syntax is .resample(...).mean(), or .resample(...).apply(<func>).
axis : {0 or ‘index’, 1 or ‘columns’}, default 0 (Not supported in Dask)
Which axis to use for up- or down-sampling. For Series this will default to 0, i.e. along the rows. Must be DatetimeIndex, TimedeltaIndex or PeriodIndex.
fill_method : str, default None (Not supported in Dask)
Filling method for upsampling.
Deprecated since version 0.18.0: The new syntax is .resample(...).<func>(), e.g. .resample(...).pad().
closed : {‘right’, ‘left’}, default None
Which side of bin interval is closed. The default is ‘left’ for all frequency offsets except for ‘M’, ‘A’, ‘Q’, ‘BM’, ‘BA’, ‘BQ’, and ‘W’ which all have a default of ‘right’.
label : {‘right’, ‘left’}, default None
Which bin edge label to label bucket with. The default is ‘left’ for all frequency offsets except for ‘M’, ‘A’, ‘Q’, ‘BM’, ‘BA’, ‘BQ’, and ‘W’ which all have a default of ‘right’.
convention : {‘start’, ‘end’, ‘s’, ‘e’}, default ‘start’ (Not supported in Dask)
For PeriodIndex only, controls whether to use the start or end of rule.
kind : {‘timestamp’, ‘period’}, optional, default None (Not supported in Dask)
Pass ‘timestamp’ to convert the resulting index to a DateTimeIndex or ‘period’ to convert it to a PeriodIndex. By default the input representation is retained.
loffset : timedelta, default None (Not supported in Dask)
Adjust the resampled time labels.
limit : int, default None (Not supported in Dask)
Maximum size gap when reindexing with fill_method.
Deprecated since version 0.18.0.
base : int, default 0 (Not supported in Dask)
For frequencies that evenly subdivide 1 day, the “origin” of the aggregated intervals. For example, for ‘5min’ frequency, base could range from 0 through 4. Defaults to 0.
on : str, optional (Not supported in Dask)
For a DataFrame, column to use instead of index for resampling. Column must be datetime-like.
New in version 0.19.0.
level : str or int, optional (Not supported in Dask)
For a MultiIndex, level (name or number) to use for resampling. level must be datetime-like.
New in version 0.19.0.
Returns: Resampler object
See also
groupby
- Group by mapping, function, label, or list of labels.
Series.resample
- Resample a Series.
DataFrame.resample
- Resample a DataFrame.
Notes
See the user guide for more.
To learn more about the offset strings, please see this link.
Examples
Start by creating a series with 9 one minute timestamps.
>>> index = pd.date_range('1/1/2000', periods=9, freq='T') # doctest: +SKIP >>> series = pd.Series(range(9), index=index) # doctest: +SKIP >>> series # doctest: +SKIP 2000-01-01 00:00:00 0 2000-01-01 00:01:00 1 2000-01-01 00:02:00 2 2000-01-01 00:03:00 3 2000-01-01 00:04:00 4 2000-01-01 00:05:00 5 2000-01-01 00:06:00 6 2000-01-01 00:07:00 7 2000-01-01 00:08:00 8 Freq: T, dtype: int64
Downsample the series into 3 minute bins and sum the values of the timestamps falling into a bin.
>>> series.resample('3T').sum() # doctest: +SKIP 2000-01-01 00:00:00 3 2000-01-01 00:03:00 12 2000-01-01 00:06:00 21 Freq: 3T, dtype: int64
Downsample the series into 3 minute bins as above, but label each bin using the right edge instead of the left. Please note that the value in the bucket used as the label is not included in the bucket, which it labels. For example, in the original series the bucket 2000-01-01 00:03:00 contains the value 3, but the summed value in the resampled bucket with the label 2000-01-01 00:03:00 does not include 3 (if it did, the summed value would be 6, not 3). To include this value close the right side of the bin interval as illustrated in the example below this one.
>>> series.resample('3T', label='right').sum() # doctest: +SKIP 2000-01-01 00:03:00 3 2000-01-01 00:06:00 12 2000-01-01 00:09:00 21 Freq: 3T, dtype: int64
Downsample the series into 3 minute bins as above, but close the right side of the bin interval.
>>> series.resample('3T', label='right', closed='right').sum() # doctest: +SKIP 2000-01-01 00:00:00 0 2000-01-01 00:03:00 6 2000-01-01 00:06:00 15 2000-01-01 00:09:00 15 Freq: 3T, dtype: int64
Upsample the series into 30 second bins.
>>> series.resample('30S').asfreq()[0:5] # Select first 5 rows # doctest: +SKIP 2000-01-01 00:00:00 0.0 2000-01-01 00:00:30 NaN 2000-01-01 00:01:00 1.0 2000-01-01 00:01:30 NaN 2000-01-01 00:02:00 2.0 Freq: 30S, dtype: float64
Upsample the series into 30 second bins and fill the NaN values using the pad method.
>>> series.resample('30S').pad()[0:5] # doctest: +SKIP 2000-01-01 00:00:00 0 2000-01-01 00:00:30 0 2000-01-01 00:01:00 1 2000-01-01 00:01:30 1 2000-01-01 00:02:00 2 Freq: 30S, dtype: int64
Upsample the series into 30 second bins and fill the NaN values using the bfill method.
>>> series.resample('30S').bfill()[0:5] # doctest: +SKIP 2000-01-01 00:00:00 0 2000-01-01 00:00:30 1 2000-01-01 00:01:00 1 2000-01-01 00:01:30 2 2000-01-01 00:02:00 2 Freq: 30S, dtype: int64
Pass a custom function via apply.
>>> def custom_resampler(array_like): # doctest: +SKIP ... return np.sum(array_like) + 5 ... >>> series.resample('3T').apply(custom_resampler) # doctest: +SKIP 2000-01-01 00:00:00 8 2000-01-01 00:03:00 17 2000-01-01 00:06:00 26 Freq: 3T, dtype: int64
For a Series with a PeriodIndex, the keyword convention can be used to control whether to use the start or end of rule.
Resample a year by quarter using ‘start’ convention. Values are assigned to the first quarter of the period.
>>> s = pd.Series([1, 2], index=pd.period_range('2012-01-01', # doctest: +SKIP ... freq='A', ... periods=2)) >>> s # doctest: +SKIP 2012 1 2013 2 Freq: A-DEC, dtype: int64 >>> s.resample('Q', convention='start').asfreq() # doctest: +SKIP 2012Q1 1.0 2012Q2 NaN 2012Q3 NaN 2012Q4 NaN 2013Q1 2.0 2013Q2 NaN 2013Q3 NaN 2013Q4 NaN Freq: Q-DEC, dtype: float64
Resample quarters by month using ‘end’ convention. Values are assigned to the last month of the period.
>>> q = pd.Series([1, 2, 3, 4], index=pd.period_range('2018-01-01', # doctest: +SKIP ... freq='Q', ... periods=4)) >>> q # doctest: +SKIP 2018Q1 1 2018Q2 2 2018Q3 3 2018Q4 4 Freq: Q-DEC, dtype: int64 >>> q.resample('M', convention='end').asfreq() # doctest: +SKIP 2018-03 1.0 2018-04 NaN 2018-05 NaN 2018-06 2.0 2018-07 NaN 2018-08 NaN 2018-09 3.0 2018-10 NaN 2018-11 NaN 2018-12 4.0 Freq: M, dtype: float64
For DataFrame objects, the keyword on can be used to specify the column instead of the index for resampling.
>>> d = dict({'price': [10, 11, 9, 13, 14, 18, 17, 19], # doctest: +SKIP ... 'volume': [50, 60, 40, 100, 50, 100, 40, 50]}) >>> df = pd.DataFrame(d) # doctest: +SKIP >>> df['week_starting'] = pd.date_range('01/01/2018', # doctest: +SKIP ... periods=8, ... freq='W') >>> df # doctest: +SKIP price volume week_starting 0 10 50 2018-01-07 1 11 60 2018-01-14 2 9 40 2018-01-21 3 13 100 2018-01-28 4 14 50 2018-02-04 5 18 100 2018-02-11 6 17 40 2018-02-18 7 19 50 2018-02-25 >>> df.resample('M', on='week_starting').mean() # doctest: +SKIP price volume week_starting 2018-01-31 10.75 62.5 2018-02-28 17.00 60.0
For a DataFrame with MultiIndex, the keyword level can be used to specify on which level the resampling needs to take place.
>>> days = pd.date_range('1/1/2000', periods=4, freq='D') # doctest: +SKIP >>> d2 = dict({'price': [10, 11, 9, 13, 14, 18, 17, 19], # doctest: +SKIP ... 'volume': [50, 60, 40, 100, 50, 100, 40, 50]}) >>> df2 = pd.DataFrame(d2, # doctest: +SKIP ... index=pd.MultiIndex.from_product([days, ... ['morning', ... 'afternoon']] ... )) >>> df2 # doctest: +SKIP price volume 2000-01-01 morning 10 50 afternoon 11 60 2000-01-02 morning 9 40 afternoon 13 100 2000-01-03 morning 14 50 afternoon 18 100 2000-01-04 morning 17 40 afternoon 19 50 >>> df2.resample('D', level=0).sum() # doctest: +SKIP price volume 2000-01-01 21 110 2000-01-02 22 140 2000-01-03 32 150 2000-01-04 36 90
-
reset_index
(drop=False)¶ Reset the index to the default index.
Note that unlike in pandas, the reset dask.dataframe index will not be monotonically increasing from 0. Instead, it will restart at 0 for each partition (e.g. index1 = [0, ..., 10], index2 = [0, ...]). This is due to the inability to statically know the full length of the index.
For a DataFrame with a multi-level index, returns a new DataFrame with labeling information in the columns under the index names, defaulting to ‘level_0’, ‘level_1’, etc. if any are None. For a standard index, the index name will be used (if set), otherwise a default ‘index’ or ‘level_0’ (if ‘index’ is already taken) will be used.
Parameters: drop : boolean, default False
Do not try to insert index into dataframe columns.
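Examples
A minimal sketch, assuming a Dask DataFrame ddf; note that the new index restarts at 0 in each partition:
>>> ddf2 = ddf.reset_index() # old index becomes a column # doctest: +SKIP
>>> ddf3 = ddf.reset_index(drop=True) # discard the old index instead of keeping it # doctest: +SKIP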
-
rfloordiv
(other, level=None, fill_value=None, axis=0)¶ Return Integer division of series and other, element-wise (binary operator rfloordiv).
This docstring was copied from pandas.core.series.Series.rfloordiv.
Some inconsistencies with the Dask version may exist.
Equivalent to other // series, but with support to substitute a fill_value for missing data in one of the inputs.
Parameters: other : Series or scalar value
fill_value : None or float value, default None (NaN)
Fill existing missing (NaN) values, and any new element needed for successful Series alignment, with this value before computation. If data in both corresponding Series locations is missing the result will be missing.
level : int or name
Broadcast across a level, matching Index values on the passed MultiIndex level.
Returns: Series
The result of the operation.
See also
Examples
>>> a = pd.Series([1, 1, 1, np.nan], index=['a', 'b', 'c', 'd']) # doctest: +SKIP >>> a # doctest: +SKIP a 1.0 b 1.0 c 1.0 d NaN dtype: float64 >>> b = pd.Series([1, np.nan, 1, np.nan], index=['a', 'b', 'd', 'e']) # doctest: +SKIP >>> b # doctest: +SKIP a 1.0 b NaN d 1.0 e NaN dtype: float64 >>> a.floordiv(b, fill_value=0) # doctest: +SKIP a 1.0 b NaN c NaN d 0.0 e NaN dtype: float64
-
rmod
(other, level=None, fill_value=None, axis=0)¶ Return Modulo of series and other, element-wise (binary operator rmod).
This docstring was copied from pandas.core.series.Series.rmod.
Some inconsistencies with the Dask version may exist.
Equivalent to other % series, but with support to substitute a fill_value for missing data in one of the inputs.
Parameters: other : Series or scalar value
fill_value : None or float value, default None (NaN)
Fill existing missing (NaN) values, and any new element needed for successful Series alignment, with this value before computation. If data in both corresponding Series locations is missing the result will be missing.
level : int or name
Broadcast across a level, matching Index values on the passed MultiIndex level.
Returns: Series
The result of the operation.
See also
Examples
>>> a = pd.Series([1, 1, 1, np.nan], index=['a', 'b', 'c', 'd']) # doctest: +SKIP >>> a # doctest: +SKIP a 1.0 b 1.0 c 1.0 d NaN dtype: float64 >>> b = pd.Series([1, np.nan, 1, np.nan], index=['a', 'b', 'd', 'e']) # doctest: +SKIP >>> b # doctest: +SKIP a 1.0 b NaN d 1.0 e NaN dtype: float64 >>> a.mod(b, fill_value=0) # doctest: +SKIP a 0.0 b NaN c NaN d 0.0 e NaN dtype: float64
-
rmul
(other, level=None, fill_value=None, axis=0)¶ Return Multiplication of series and other, element-wise (binary operator rmul).
This docstring was copied from pandas.core.series.Series.rmul.
Some inconsistencies with the Dask version may exist.
Equivalent to other * series, but with support to substitute a fill_value for missing data in one of the inputs.
Parameters: other : Series or scalar value
fill_value : None or float value, default None (NaN)
Fill existing missing (NaN) values, and any new element needed for successful Series alignment, with this value before computation. If data in both corresponding Series locations is missing the result will be missing.
level : int or name
Broadcast across a level, matching Index values on the passed MultiIndex level.
Returns: Series
The result of the operation.
See also
Examples
>>> a = pd.Series([1, 1, 1, np.nan], index=['a', 'b', 'c', 'd']) # doctest: +SKIP >>> a # doctest: +SKIP a 1.0 b 1.0 c 1.0 d NaN dtype: float64 >>> b = pd.Series([1, np.nan, 1, np.nan], index=['a', 'b', 'd', 'e']) # doctest: +SKIP >>> b # doctest: +SKIP a 1.0 b NaN d 1.0 e NaN dtype: float64 >>> a.multiply(b, fill_value=0) # doctest: +SKIP a 1.0 b 0.0 c 0.0 d 0.0 e NaN dtype: float64
-
rolling
(window, min_periods=None, center=False, win_type=None, axis=0)¶ Provides rolling transformations.
Parameters: window : int, str, offset
Size of the moving window. This is the number of observations used for calculating the statistic. When not using a DatetimeIndex, the window size must not be so large as to span more than one adjacent partition. If using an offset or offset alias like '5D', the data must have a DatetimeIndex.
Changed in version 0.15.0: Now accepts offsets and string offset aliases.
min_periods : int, default None
Minimum number of observations in window required to have a value (otherwise result is NA).
center : boolean, default False
Set the labels at the center of the window.
win_type : string, default None
Provide a window type. The recognized window types are identical to pandas.
axis : int, default 0
Returns: a Rolling object on which to call a method to compute a statistic
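Examples
A minimal sketch, assuming a numeric Dask DataFrame ddf and a datetime-indexed frame ts for the offset form:
>>> ddf.x.rolling(3).mean().compute() # 3-observation moving average # doctest: +SKIP
>>> ts.y.rolling('5D').sum().compute() # 5-day offset window; requires a DatetimeIndex # doctest: +SKIP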
-
round
(decimals=0)¶ Round each value in a Series to the given number of decimals.
This docstring was copied from pandas.core.series.Series.round.
Some inconsistencies with the Dask version may exist.
Parameters: decimals : int
Number of decimal places to round to (default: 0). If decimals is negative, it specifies the number of positions to the left of the decimal point.
Returns: Series
Rounded values of the Series.
See also
numpy.around
- Round values of an np.array.
DataFrame.round
- Round values of a DataFrame.
Examples
>>> s = pd.Series([0.1, 1.3, 2.7]) # doctest: +SKIP >>> s.round() # doctest: +SKIP 0 0.0 1 1.0 2 3.0 dtype: float64
-
rpow
(other, level=None, fill_value=None, axis=0)¶ Return Exponential power of series and other, element-wise (binary operator rpow).
This docstring was copied from pandas.core.series.Series.rpow.
Some inconsistencies with the Dask version may exist.
Equivalent to other ** series, but with support to substitute a fill_value for missing data in one of the inputs.
Parameters: other : Series or scalar value
fill_value : None or float value, default None (NaN)
Fill existing missing (NaN) values, and any new element needed for successful Series alignment, with this value before computation. If data in both corresponding Series locations is missing the result will be missing.
level : int or name
Broadcast across a level, matching Index values on the passed MultiIndex level.
Returns: Series
The result of the operation.
See also
Examples
>>> a = pd.Series([1, 1, 1, np.nan], index=['a', 'b', 'c', 'd']) # doctest: +SKIP >>> a # doctest: +SKIP a 1.0 b 1.0 c 1.0 d NaN dtype: float64 >>> b = pd.Series([1, np.nan, 1, np.nan], index=['a', 'b', 'd', 'e']) # doctest: +SKIP >>> b # doctest: +SKIP a 1.0 b NaN d 1.0 e NaN dtype: float64 >>> a.pow(b, fill_value=0) # doctest: +SKIP a 1.0 b 1.0 c 1.0 d 0.0 e NaN dtype: float64
-
rsub
(other, level=None, fill_value=None, axis=0)¶ Return Subtraction of series and other, element-wise (binary operator rsub).
This docstring was copied from pandas.core.series.Series.rsub.
Some inconsistencies with the Dask version may exist.
Equivalent to other - series, but with support to substitute a fill_value for missing data in one of the inputs.
Parameters: other : Series or scalar value
fill_value : None or float value, default None (NaN)
Fill existing missing (NaN) values, and any new element needed for successful Series alignment, with this value before computation. If data in both corresponding Series locations is missing the result will be missing.
level : int or name
Broadcast across a level, matching Index values on the passed MultiIndex level.
Returns: Series
The result of the operation.
See also
Examples
>>> a = pd.Series([1, 1, 1, np.nan], index=['a', 'b', 'c', 'd']) # doctest: +SKIP >>> a # doctest: +SKIP a 1.0 b 1.0 c 1.0 d NaN dtype: float64 >>> b = pd.Series([1, np.nan, 1, np.nan], index=['a', 'b', 'd', 'e']) # doctest: +SKIP >>> b # doctest: +SKIP a 1.0 b NaN d 1.0 e NaN dtype: float64 >>> a.subtract(b, fill_value=0) # doctest: +SKIP a 0.0 b 1.0 c 1.0 d -1.0 e NaN dtype: float64
-
rtruediv
(other, level=None, fill_value=None, axis=0)¶ Return Floating division of series and other, element-wise (binary operator rtruediv).
This docstring was copied from pandas.core.series.Series.rtruediv.
Some inconsistencies with the Dask version may exist.
Equivalent to other / series, but with support to substitute a fill_value for missing data in one of the inputs.
Parameters: other : Series or scalar value
fill_value : None or float value, default None (NaN)
Fill existing missing (NaN) values, and any new element needed for successful Series alignment, with this value before computation. If data in both corresponding Series locations is missing the result will be missing.
level : int or name
Broadcast across a level, matching Index values on the passed MultiIndex level.
Returns: Series
The result of the operation.
See also
Examples
>>> a = pd.Series([1, 1, 1, np.nan], index=['a', 'b', 'c', 'd']) # doctest: +SKIP >>> a # doctest: +SKIP a 1.0 b 1.0 c 1.0 d NaN dtype: float64 >>> b = pd.Series([1, np.nan, 1, np.nan], index=['a', 'b', 'd', 'e']) # doctest: +SKIP >>> b # doctest: +SKIP a 1.0 b NaN d 1.0 e NaN dtype: float64 >>> a.divide(b, fill_value=0) # doctest: +SKIP a 1.0 b inf c inf d 0.0 e NaN dtype: float64
-
sample
(n=None, frac=None, replace=False, random_state=None)¶ Random sample of items
Parameters: n : int, optional
Number of items to return. This is not supported by Dask; use frac instead.
frac : float, optional
Fraction of axis items to return.
replace : boolean, optional
Sample with or without replacement. Default = False.
random_state : int or np.random.RandomState
If int, we create a new RandomState with this as the seed. Otherwise we draw from the passed RandomState.
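Examples
A minimal sketch (frac rather than n, as noted above), assuming a Dask DataFrame ddf:
>>> ddf.sample(frac=0.1, random_state=123) # roughly 10% of rows # doctest: +SKIP
>>> ddf.sample(frac=0.5, replace=True, random_state=123) # sample with replacement # doctest: +SKIP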
-
sem
(axis=None, skipna=None, ddof=1, split_every=False)¶ Return unbiased standard error of the mean over requested axis.
This docstring was copied from pandas.core.frame.DataFrame.sem.
Some inconsistencies with the Dask version may exist.
Normalized by N-1 by default. This can be changed using the ddof argument
Parameters: axis : {index (0), columns (1)}
skipna : bool, default True
Exclude NA/null values. If an entire row/column is NA, the result will be NA
level : int or level name, default None (Not supported in Dask)
If the axis is a MultiIndex (hierarchical), count along a particular level, collapsing into a Series
ddof : int, default 1
Delta Degrees of Freedom. The divisor used in calculations is N - ddof, where N represents the number of elements.
numeric_only : bool, default None (Not supported in Dask)
Include only float, int, boolean columns. If None, will attempt to use everything, then use only numeric data. Not implemented for Series.
Returns: Series or DataFrame (if level specified)
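Examples
A minimal sketch, assuming a numeric Dask DataFrame ddf:
>>> ddf.sem().compute() # standard error of the mean for each column # doctest: +SKIP
>>> ddf.x.sem(ddof=0).compute() # population variant (ddof=0) for one column # doctest: +SKIP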
-
shape
¶ Return a tuple representing the dimensionality of a Series.
The single element of the tuple is a Delayed result.
Examples
>>> series.shape # doctest: +SKIP # (dd.Scalar<size-ag..., dtype=int64>,)
-
shift
(periods=1, freq=None, axis=0)¶ Shift index by desired number of periods with an optional time freq.
This docstring was copied from pandas.core.frame.DataFrame.shift.
Some inconsistencies with the Dask version may exist.
When freq is not passed, shift the index without realigning the data. If freq is passed (in this case, the index must be date or datetime, or it will raise a NotImplementedError), the index will be increased using the periods and the freq.
Parameters: periods : int
Number of periods to shift. Can be positive or negative.
freq : DateOffset, tseries.offsets, timedelta, or str, optional
Offset to use from the tseries module or time rule (e.g. ‘EOM’). If freq is specified then the index values are shifted but the data is not realigned. That is, use freq if you would like to extend the index when shifting and preserve the original data.
axis : {0 or ‘index’, 1 or ‘columns’, None}, default None
Shift direction.
fill_value : object, optional (Not supported in Dask)
The scalar value to use for newly introduced missing values. The default depends on the dtype of self. For numeric data, np.nan is used. For datetime, timedelta, or period data, NaT is used. For extension dtypes, self.dtype.na_value is used.
Changed in version 0.24.0.
Returns: DataFrame
Copy of input object, shifted.
See also
Index.shift
- Shift values of Index.
DatetimeIndex.shift
- Shift values of DatetimeIndex.
PeriodIndex.shift
- Shift values of PeriodIndex.
tshift
- Shift the time index, using the index’s frequency if available.
Examples
>>> df = pd.DataFrame({'Col1': [10, 20, 15, 30, 45], # doctest: +SKIP ... 'Col2': [13, 23, 18, 33, 48], ... 'Col3': [17, 27, 22, 37, 52]})
>>> df.shift(periods=3) # doctest: +SKIP Col1 Col2 Col3 0 NaN NaN NaN 1 NaN NaN NaN 2 NaN NaN NaN 3 10.0 13.0 17.0 4 20.0 23.0 27.0
>>> df.shift(periods=1, axis='columns') # doctest: +SKIP Col1 Col2 Col3 0 NaN 10.0 13.0 1 NaN 20.0 23.0 2 NaN 15.0 18.0 3 NaN 30.0 33.0 4 NaN 45.0 48.0
>>> df.shift(periods=3, fill_value=0) # doctest: +SKIP Col1 Col2 Col3 0 0 0 0 1 0 0 0 2 0 0 0 3 10 13 17 4 20 23 27
-
size
¶ Size of the Series or DataFrame as a Delayed object.
Examples
>>> series.size # doctest: +SKIP dd.Scalar<size-ag..., dtype=int64>
-
squeeze
()¶ Squeeze 1 dimensional axis objects into scalars.
This docstring was copied from pandas.core.series.Series.squeeze.
Some inconsistencies with the Dask version may exist.
Series or DataFrames with a single element are squeezed to a scalar. DataFrames with a single column or a single row are squeezed to a Series. Otherwise the object is unchanged.
This method is most useful when you don’t know if your object is a Series or DataFrame, but you do know it has just a single column. In that case you can safely call squeeze to ensure you have a Series.
Parameters: axis : {0 or ‘index’, 1 or ‘columns’, None}, default None (Not supported in Dask)
A specific axis to squeeze. By default, all length-1 axes are squeezed.
New in version 0.20.0.
Returns: DataFrame, Series, or scalar
The projection after squeezing axis or all the axes.
See also
Series.iloc
- Integer-location based indexing for selecting scalars.
DataFrame.iloc
- Integer-location based indexing for selecting Series.
Series.to_frame
- Inverse of DataFrame.squeeze for a single-column DataFrame.
Examples
>>> primes = pd.Series([2, 3, 5, 7]) # doctest: +SKIP
Slicing might produce a Series with a single value:
>>> even_primes = primes[primes % 2 == 0]  # doctest: +SKIP
>>> even_primes  # doctest: +SKIP
0    2
dtype: int64
>>> even_primes.squeeze()  # doctest: +SKIP
2
Squeezing objects with more than one value in every axis does nothing:
>>> odd_primes = primes[primes % 2 == 1]  # doctest: +SKIP
>>> odd_primes  # doctest: +SKIP
1    3
2    5
3    7
dtype: int64
>>> odd_primes.squeeze()  # doctest: +SKIP
1    3
2    5
3    7
dtype: int64
Squeezing is even more effective when used with DataFrames.
>>> df = pd.DataFrame([[1, 2], [3, 4]], columns=['a', 'b'])  # doctest: +SKIP
>>> df  # doctest: +SKIP
   a  b
0  1  2
1  3  4
Slicing a single column will produce a DataFrame with the columns having only one value:
>>> df_a = df[['a']]  # doctest: +SKIP
>>> df_a  # doctest: +SKIP
   a
0  1
1  3
So the columns can be squeezed down, resulting in a Series:
>>> df_a.squeeze('columns')  # doctest: +SKIP
0    1
1    3
Name: a, dtype: int64
Slicing a single row from a single column will produce a single scalar DataFrame:
>>> df_0a = df.loc[df.index < 1, ['a']]  # doctest: +SKIP
>>> df_0a  # doctest: +SKIP
   a
0  1
Squeezing the rows produces a single scalar Series:
>>> df_0a.squeeze('rows')  # doctest: +SKIP
a    1
Name: 0, dtype: int64
Squeezing all axes will project directly into a scalar:
>>> df_0a.squeeze()  # doctest: +SKIP
1
-
std
(axis=None, skipna=True, ddof=1, split_every=False, dtype=None, out=None)¶ Return sample standard deviation over requested axis.
This docstring was copied from pandas.core.frame.DataFrame.std.
Some inconsistencies with the Dask version may exist.
Normalized by N-1 by default. This can be changed using the ddof argument
Parameters: axis : {index (0), columns (1)}
skipna : bool, default True
Exclude NA/null values. If an entire row/column is NA, the result will be NA
level : int or level name, default None (Not supported in Dask)
If the axis is a MultiIndex (hierarchical), count along a particular level, collapsing into a Series
ddof : int, default 1
Delta Degrees of Freedom. The divisor used in calculations is N - ddof, where N represents the number of elements.
numeric_only : bool, default None (Not supported in Dask)
Include only float, int, boolean columns. If None, will attempt to use everything, then use only numeric data. Not implemented for Series.
Returns: Series or DataFrame (if level specified)
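Added for illustration (not part of the copied pandas docstring): a minimal sketch of the column-wise sample standard deviation on an assumed small Dask DataFrame.
>>> import pandas as pd  # doctest: +SKIP
>>> import dask.dataframe as dd  # doctest: +SKIP
>>> ddf = dd.from_pandas(pd.DataFrame({'a': [1.0, 2.0, 3.0, 4.0],
...                                    'b': [5.0, 6.0, 7.0, 8.0]}), npartitions=2)  # doctest: +SKIP
>>> ddf.std(ddof=1).compute()  # column-wise sample standard deviation  # doctest: +SKIP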
-
str
¶ Namespace for string methods
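A minimal sketch (added here for illustration, with an invented Series) of using the .str accessor on a Dask Series of strings; the methods mirror pandas string methods and stay lazy until compute().
>>> import pandas as pd  # doctest: +SKIP
>>> import dask.dataframe as dd  # doctest: +SKIP
>>> s = dd.from_pandas(pd.Series(['apple', 'Banana', 'cherry']), npartitions=2)  # doctest: +SKIP
>>> s.str.upper().compute()  # element-wise string method  # doctest: +SKIP
>>> s.str.contains('an').compute()  # doctest: +SKIP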
-
sub
(other, level=None, fill_value=None, axis=0)¶ Return Subtraction of series and other, element-wise (binary operator sub).
This docstring was copied from pandas.core.series.Series.sub.
Some inconsistencies with the Dask version may exist.
Equivalent to series - other, but with support to substitute a fill_value for missing data in one of the inputs.
Parameters: other : Series or scalar value
fill_value : None or float value, default None (NaN)
Fill existing missing (NaN) values, and any new element needed for successful Series alignment, with this value before computation. If data in both corresponding Series locations is missing the result will be missing.
level : int or name
Broadcast across a level, matching Index values on the passed MultiIndex level.
Returns: Series
The result of the operation.
See also
Examples
>>> a = pd.Series([1, 1, 1, np.nan], index=['a', 'b', 'c', 'd']) # doctest: +SKIP >>> a # doctest: +SKIP a 1.0 b 1.0 c 1.0 d NaN dtype: float64 >>> b = pd.Series([1, np.nan, 1, np.nan], index=['a', 'b', 'd', 'e']) # doctest: +SKIP >>> b # doctest: +SKIP a 1.0 b NaN d 1.0 e NaN dtype: float64 >>> a.subtract(b, fill_value=0) # doctest: +SKIP a 0.0 b 1.0 c 1.0 d -1.0 e NaN dtype: float64
-
sum
(axis=None, skipna=True, split_every=False, dtype=None, out=None, min_count=None)¶ Return the sum of the values for the requested axis.
This docstring was copied from pandas.core.frame.DataFrame.sum.
Some inconsistencies with the Dask version may exist.
This is equivalent to the method numpy.sum.
Parameters: axis : {index (0), columns (1)}
Axis for the function to be applied on.
skipna : bool, default True
Exclude NA/null values when computing the result.
level : int or level name, default None (Not supported in Dask)
If the axis is a MultiIndex (hierarchical), count along a particular level, collapsing into a Series.
numeric_only : bool, default None (Not supported in Dask)
Include only float, int, boolean columns. If None, will attempt to use everything, then use only numeric data. Not implemented for Series.
min_count : int, default 0
The required number of valid values to perform the operation. If fewer than min_count non-NA values are present the result will be NA.
New in version 0.22.0: Added with the default being 0. This means the sum of an all-NA or empty Series is 0, and the product of an all-NA or empty Series is 1.
**kwargs
Additional keyword arguments to be passed to the function.
Returns: Series or DataFrame (if level specified)
See also
Series.sum
- Return the sum.
Series.min
- Return the minimum.
Series.max
- Return the maximum.
Series.idxmin
- Return the index of the minimum.
Series.idxmax
- Return the index of the maximum.
DataFrame.sum
- Return the sum over the requested axis.
DataFrame.min
- Return the minimum over the requested axis.
DataFrame.max
- Return the maximum over the requested axis.
DataFrame.idxmin
- Return the index of the minimum over the requested axis.
DataFrame.idxmax
- Return the index of the maximum over the requested axis.
Examples
>>> idx = pd.MultiIndex.from_arrays([ # doctest: +SKIP ... ['warm', 'warm', 'cold', 'cold'], ... ['dog', 'falcon', 'fish', 'spider']], ... names=['blooded', 'animal']) >>> s = pd.Series([4, 2, 0, 8], name='legs', index=idx) # doctest: +SKIP >>> s # doctest: +SKIP blooded animal warm dog 4 falcon 2 cold fish 0 spider 8 Name: legs, dtype: int64
>>> s.sum() # doctest: +SKIP 14
Sum using level names, as well as indices.
>>> s.sum(level='blooded') # doctest: +SKIP blooded warm 6 cold 8 Name: legs, dtype: int64
>>> s.sum(level=0) # doctest: +SKIP blooded warm 6 cold 8 Name: legs, dtype: int64
By default, the sum of an empty or all-NA Series is 0.
>>> pd.Series([]).sum()  # min_count=0 is the default  # doctest: +SKIP
0.0
This can be controlled with the min_count parameter. For example, if you’d like the sum of an empty series to be NaN, pass min_count=1.
>>> pd.Series([]).sum(min_count=1)  # doctest: +SKIP
nan
Thanks to the skipna parameter, min_count handles all-NA and empty series identically.
>>> pd.Series([np.nan]).sum()  # doctest: +SKIP
0.0
>>> pd.Series([np.nan]).sum(min_count=1)  # doctest: +SKIP
nan
-
tail
(n=5, compute=True)¶ Last n rows of the dataset
Caveat: this only checks the last n rows of the last partition.
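Added for illustration (not part of the original docstring), with an invented frame and partitioning:
>>> import pandas as pd  # doctest: +SKIP
>>> import dask.dataframe as dd  # doctest: +SKIP
>>> ddf = dd.from_pandas(pd.DataFrame({'x': range(10)}), npartitions=3)  # doctest: +SKIP
>>> ddf.tail(2)  # returns a concrete pandas object because compute=True by default  # doctest: +SKIP
>>> ddf.tail(2, compute=False)  # keep the result lazy instead  # doctest: +SKIP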
-
to_bag
(index=False)¶ Create a Dask Bag from a Series
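A minimal sketch (added for illustration; the Series is invented) of converting a Dask Series into a Dask Bag:
>>> import pandas as pd  # doctest: +SKIP
>>> import dask.dataframe as dd  # doctest: +SKIP
>>> s = dd.from_pandas(pd.Series([10, 20, 30]), npartitions=2)  # doctest: +SKIP
>>> bag = s.to_bag()  # one bag element per value (index=False is the default)  # doctest: +SKIP
>>> bag.take(2)  # doctest: +SKIP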
-
to_csv
(filename, **kwargs)¶ Store Dask DataFrame to CSV files
One filename per partition will be created. You can specify the filenames in a variety of ways.
Use a globstring:
>>> df.to_csv('/path/to/data/export-*.csv')
The * will be replaced by the increasing sequence 0, 1, 2, …
/path/to/data/export-0.csv /path/to/data/export-1.csv
Use a globstring and a name_function= keyword argument. The name_function function should expect an integer and produce a string. Strings produced by name_function must preserve the order of their respective partition indices.
>>> from datetime import date, timedelta
>>> def name(i):
...     return str(date(2015, 1, 1) + i * timedelta(days=1))
>>> name(0)
'2015-01-01'
>>> name(15)
'2015-01-16'
>>> df.to_csv('/path/to/data/export-*.csv', name_function=name)  # doctest: +SKIP
/path/to/data/export-2015-01-01.csv
/path/to/data/export-2015-01-02.csv
...
You can also provide an explicit list of paths:
>>> paths = ['/path/to/data/alice.csv', '/path/to/data/bob.csv', ...]  # doctest: +SKIP
>>> df.to_csv(paths)  # doctest: +SKIP
Parameters: filename : string
Path glob indicating the naming scheme for the output files
name_function : callable, default None
Function accepting an integer (partition index) and producing a string to replace the asterisk in the given filename globstring. Should preserve the lexicographic order of partitions. Not supported when single_file is True.
single_file : bool, default False
Whether to save everything into a single CSV file. Under the single file mode, each partition is appended at the end of the specified CSV file. Note that not all filesystems support the append mode and thus the single file mode, especially on cloud storage systems such as S3 or GCS. A warning will be issued when writing to a file that is not backed by a local filesystem.
compression : string or None
String like ‘gzip’ or ‘xz’. Must support efficient random access. Filenames with extensions corresponding to known compression algorithms (gz, bz2) will be compressed accordingly automatically
sep : character, default ‘,’
Field delimiter for the output file
na_rep : string, default ‘’
Missing data representation
float_format : string, default None
Format string for floating point numbers
columns : sequence, optional
Columns to write
header : boolean or list of string, default True
Write out column names. If a list of string is given it is assumed to be aliases for the column names
header_first_partition_only : boolean, default None
If set to True, only write the header row in the first output file. By default, headers are written to all partitions under the multiple file mode (single_file is False) and written only once under the single file mode (single_file is True). It must not be False under the single file mode.
index : boolean, default True
Write row names (index)
index_label : string or sequence, or False, default None
Column label for index column(s) if desired. If None is given, and header and index are True, then the index names are used. A sequence should be given if the DataFrame uses MultiIndex. If False do not print fields for index names. Use index_label=False for easier importing in R
nanRep : None
deprecated, use na_rep
mode : str
Python write mode, default ‘w’
encoding : string, optional
A string representing the encoding to use in the output file, defaults to ‘ascii’ on Python 2 and ‘utf-8’ on Python 3.
compression : string, optional
a string representing the compression to use in the output file, allowed values are ‘gzip’, ‘bz2’, ‘xz’, only used when the first argument is a filename
line_terminator : string, default '\n'
The newline character or character sequence to use in the output file
quoting : optional constant from csv module
defaults to csv.QUOTE_MINIMAL
quotechar : string (length 1), default ‘”’
character used to quote fields
doublequote : boolean, default True
Control quoting of quotechar inside a field
escapechar : string (length 1), default None
character used to escape sep and quotechar when appropriate
chunksize : int or None
rows to write at a time
tupleize_cols : boolean, default False
Write MultiIndex columns as a list of tuples (if True) or in the new, expanded format (if False)
date_format : string, default None
Format string for datetime objects
decimal: string, default ‘.’
Character recognized as decimal separator. E.g. use ‘,’ for European data
storage_options: dict
Parameters passed on to the backend filesystem class.
Returns: The names of the files written if they were computed right away.
If not, the delayed tasks associated with writing the files.
Raises: ValueError
If header_first_partition_only is set to False or name_function is specified when single_file is True.
-
to_dask_array
(lengths=None)¶ Convert a dask DataFrame to a dask array.
Parameters: lengths : bool or Sequence of ints, optional
How to determine the chunks sizes for the output array. By default, the output array will have unknown chunk lengths along the first axis, which can cause some later operations to fail.
- True : immediately compute the length of each partition
- Sequence : a sequence of integers to use for the chunk sizes on the first axis. These values are not validated for correctness, beyond ensuring that the number of items matches the number of partitions.
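Added for illustration (not part of the original docstring; the frame and partitioning are invented), showing the lengths options described above:
>>> import pandas as pd  # doctest: +SKIP
>>> import dask.dataframe as dd  # doctest: +SKIP
>>> ddf = dd.from_pandas(pd.DataFrame({'a': [1, 2, 3, 4], 'b': [5, 6, 7, 8]}), npartitions=2)  # doctest: +SKIP
>>> ddf.to_dask_array()  # chunk lengths along the first axis are unknown  # doctest: +SKIP
>>> ddf.to_dask_array(lengths=True)  # computes each partition's length up front  # doctest: +SKIP
>>> ddf.to_dask_array(lengths=[2, 2])  # or supply one length per partition  # doctest: +SKIP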
-
to_delayed
(optimize_graph=True)¶ Convert into a list of
dask.delayed
objects, one per partition.Parameters: optimize_graph : bool, optional
If True [default], the graph is optimized before converting into
dask.delayed
objects.See also
Examples
>>> partitions = df.to_delayed() # doctest: +SKIP
-
to_frame
(name=None)¶ Convert Series to DataFrame.
This docstring was copied from pandas.core.series.Series.to_frame.
Some inconsistencies with the Dask version may exist.
Parameters: name : object, default None
The passed name should substitute for the series name (if it has one).
Returns: DataFrame
DataFrame representation of Series.
Examples
>>> s = pd.Series(["a", "b", "c"], # doctest: +SKIP ... name="vals") >>> s.to_frame() # doctest: +SKIP vals 0 a 1 b 2 c
-
to_hdf
(path_or_buf, key, mode='a', append=False, **kwargs)¶ Store Dask Dataframe to Hierarchical Data Format (HDF) files
This is a parallel version of the Pandas function of the same name. Please see the Pandas docstring for more detailed information about shared keyword arguments.
This function differs from the Pandas version by saving the many partitions of a Dask DataFrame in parallel, either to many files, or to many datasets within the same file. You may specify this parallelism with an asterisk * within the filename or datapath, and an optional name_function. The asterisk will be replaced with an increasing sequence of integers starting from 0 or with the result of calling name_function on each of those integers.
This function only supports the Pandas 'table' format, not the more specialized 'fixed' format.
Parameters: path : string, pathlib.Path
Path to a target filename. Supports strings, pathlib.Path, or any object implementing the __fspath__ protocol. May contain a * to denote many filenames.
key : string
Datapath within the files. May contain a * to denote many locations.
name_function : function
A function to convert the * in the above options to a string. Should take in a number from 0 to the number of partitions and return a string. (see examples below)
compute : bool
Whether or not to execute immediately. If False then this returns a dask.Delayed value.
lock : Lock, optional
Lock to use to prevent concurrency issues. By default a threading.Lock, multiprocessing.Lock or SerializableLock will be used depending on your scheduler if a lock is required. See dask.utils.get_scheduler_lock for more information about lock selection.
scheduler : string
The scheduler to use, like "threads" or "processes"
**other:
See pandas.to_hdf for more information
Returns: filenames : list
Returned if compute is True. List of file names that each partition is saved to.
delayed : dask.Delayed
Returned if compute is False. Delayed object to execute to_hdf when computed.
See also
Examples
Save Data to a single file
>>> df.to_hdf('output.hdf', '/data') # doctest: +SKIP
Save data to multiple datapaths within the same file:
>>> df.to_hdf('output.hdf', '/data-*') # doctest: +SKIP
Save data to multiple files:
>>> df.to_hdf('output-*.hdf', '/data') # doctest: +SKIP
Save data to multiple files, using the multiprocessing scheduler:
>>> df.to_hdf('output-*.hdf', '/data', scheduler='processes') # doctest: +SKIP
Specify custom naming scheme. This writes files as ‘2000-01-01.hdf’, ‘2000-01-02.hdf’, ‘2000-01-03.hdf’, etc..
>>> from datetime import date, timedelta
>>> base = date(year=2000, month=1, day=1)
>>> def name_function(i):
...     '''Convert integer 0 to n to a date string'''
...     return str(base + timedelta(days=i))
>>> df.to_hdf('*.hdf', '/data', name_function=name_function) # doctest: +SKIP
-
to_json
(filename, *args, **kwargs)¶ See dd.to_json docstring for more information
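Added for illustration only (the path is a placeholder, and mirroring the to_csv globstring behaviour here is an assumption): writing one JSON file per partition with an asterisk in the path.
>>> df.to_json('/path/to/data/export-*.json')  # one JSON file per partition  # doctest: +SKIP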
-
to_string
(max_rows=5)¶ Render a string representation of the Series.
This docstring was copied from pandas.core.series.Series.to_string.
Some inconsistencies with the Dask version may exist.
Parameters: buf : StringIO-like, optional (Not supported in Dask)
Buffer to write to.
na_rep : str, optional (Not supported in Dask)
String representation of NaN to use, default ‘NaN’.
float_format : one-parameter function, optional (Not supported in Dask)
Formatter function to apply to columns’ elements if they are floats, default None.
header : bool, default True (Not supported in Dask)
Add the Series header (index name).
index : bool, optional (Not supported in Dask)
Add index (row) labels, default True.
length : bool, default False (Not supported in Dask)
Add the Series length.
dtype : bool, default False (Not supported in Dask)
Add the Series dtype.
name : bool, default False (Not supported in Dask)
Add the Series name if not None.
max_rows : int, optional
Maximum number of rows to show before truncating. If None, show all.
min_rows : int, optional (Not supported in Dask)
The number of rows to display in a truncated repr (when number of rows is above max_rows).
Returns: str or None
String representation of Series if buf=None, otherwise None.
-
to_timestamp
(freq=None, how='start', axis=0)¶ Cast to DatetimeIndex of timestamps, at beginning of period.
This docstring was copied from pandas.core.frame.DataFrame.to_timestamp.
Some inconsistencies with the Dask version may exist.
Parameters: freq : str, default frequency of PeriodIndex
Desired frequency.
how : {‘s’, ‘e’, ‘start’, ‘end’}
Convention for converting period to timestamp; start of period vs. end.
axis : {0 or ‘index’, 1 or ‘columns’}, default 0
The axis to convert (the index by default).
copy : bool, default True (Not supported in Dask)
If False then underlying input data is not copied.
Returns: DataFrame with DatetimeIndex
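A hedged sketch (not part of the copied docstring; the PeriodIndex frame below is invented) of converting a period-indexed Dask DataFrame to timestamps:
>>> import pandas as pd  # doctest: +SKIP
>>> import dask.dataframe as dd  # doctest: +SKIP
>>> pdf = pd.DataFrame({'x': [1, 2, 3, 4]},
...                    index=pd.period_range('2020-01', periods=4, freq='M'))  # doctest: +SKIP
>>> ddf = dd.from_pandas(pdf, npartitions=2)  # doctest: +SKIP
>>> ddf.to_timestamp(how='start').compute()  # index becomes a DatetimeIndex at period starts  # doctest: +SKIP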
-
truediv
(other, level=None, fill_value=None, axis=0)¶ Return Floating division of series and other, element-wise (binary operator truediv).
This docstring was copied from pandas.core.series.Series.truediv.
Some inconsistencies with the Dask version may exist.
Equivalent to series / other, but with support to substitute a fill_value for missing data in one of the inputs.
Parameters: other : Series or scalar value
fill_value : None or float value, default None (NaN)
Fill existing missing (NaN) values, and any new element needed for successful Series alignment, with this value before computation. If data in both corresponding Series locations is missing the result will be missing.
level : int or name
Broadcast across a level, matching Index values on the passed MultiIndex level.
Returns: Series
The result of the operation.
See also
Examples
>>> a = pd.Series([1, 1, 1, np.nan], index=['a', 'b', 'c', 'd']) # doctest: +SKIP >>> a # doctest: +SKIP a 1.0 b 1.0 c 1.0 d NaN dtype: float64 >>> b = pd.Series([1, np.nan, 1, np.nan], index=['a', 'b', 'd', 'e']) # doctest: +SKIP >>> b # doctest: +SKIP a 1.0 b NaN d 1.0 e NaN dtype: float64 >>> a.divide(b, fill_value=0) # doctest: +SKIP a 1.0 b inf c inf d 0.0 e NaN dtype: float64
-
unique
(split_every=None, split_out=1)¶ Return Series of unique values in the object. Includes NA values.
Returns: uniques : Series
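Added for illustration (not part of the original docstring; the Series and partitioning are invented):
>>> import pandas as pd  # doctest: +SKIP
>>> import dask.dataframe as dd  # doctest: +SKIP
>>> s = dd.from_pandas(pd.Series([1, 2, 2, 3, 3, 3]), npartitions=2)  # doctest: +SKIP
>>> s.unique().compute()  # distinct values, returned as a Series  # doctest: +SKIP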
-
value_counts
(split_every=None, split_out=1)¶ Return a Series containing counts of unique values.
This docstring was copied from pandas.core.series.Series.value_counts.
Some inconsistencies with the Dask version may exist.
The resulting object will be in descending order so that the first element is the most frequently-occurring element. Excludes NA values by default.
Parameters: normalize : boolean, default False (Not supported in Dask)
If True then the object returned will contain the relative frequencies of the unique values.
sort : boolean, default True (Not supported in Dask)
Sort by frequencies.
ascending : boolean, default False (Not supported in Dask)
Sort in ascending order.
bins : integer, optional (Not supported in Dask)
Rather than count values, group them into half-open bins, a convenience for
pd.cut
, only works with numeric data.dropna : boolean, default True (Not supported in Dask)
Don’t include counts of NaN.
Returns: Series
See also
Series.count
- Number of non-NA elements in a Series.
DataFrame.count
- Number of non-NA elements in a DataFrame.
Examples
>>> index = pd.Index([3, 1, 2, 3, 4, np.nan]) # doctest: +SKIP >>> index.value_counts() # doctest: +SKIP 3.0 2 4.0 1 2.0 1 1.0 1 dtype: int64
With normalize set to True, returns the relative frequency by dividing all values by the sum of values.
>>> s = pd.Series([3, 1, 2, 3, 4, np.nan]) # doctest: +SKIP >>> s.value_counts(normalize=True) # doctest: +SKIP 3.0 0.4 4.0 0.2 2.0 0.2 1.0 0.2 dtype: float64
bins
Bins can be useful for going from a continuous variable to a categorical variable; instead of counting unique apparitions of values, divide the index in the specified number of half-open bins.
>>> s.value_counts(bins=3) # doctest: +SKIP (2.0, 3.0] 2 (0.996, 2.0] 2 (3.0, 4.0] 1 dtype: int64
dropna
With dropna set to False we can also see NaN index values.
>>> s.value_counts(dropna=False) # doctest: +SKIP 3.0 2 NaN 1 4.0 1 2.0 1 1.0 1 dtype: int64
-
values
¶ Return a dask.array of the values of this dataframe
Warning: This creates a dask.array without precise shape information. Operations that depend on shape information, like slicing or reshaping, will not work.
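A minimal sketch (added for illustration with an invented frame) of pulling the values out as a dask.array:
>>> import pandas as pd  # doctest: +SKIP
>>> import dask.dataframe as dd  # doctest: +SKIP
>>> ddf = dd.from_pandas(pd.DataFrame({'a': [1, 2, 3], 'b': [4, 5, 6]}), npartitions=2)  # doctest: +SKIP
>>> arr = ddf.values  # a dask.array with unknown chunk lengths along the rows  # doctest: +SKIP
>>> arr.compute()  # doctest: +SKIP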
-
var
(axis=None, skipna=True, ddof=1, split_every=False, dtype=None, out=None)¶ Return unbiased variance over requested axis.
This docstring was copied from pandas.core.frame.DataFrame.var.
Some inconsistencies with the Dask version may exist.
Normalized by N-1 by default. This can be changed using the ddof argument
Parameters: axis : {index (0), columns (1)}
skipna : bool, default True
Exclude NA/null values. If an entire row/column is NA, the result will be NA
level : int or level name, default None (Not supported in Dask)
If the axis is a MultiIndex (hierarchical), count along a particular level, collapsing into a Series
ddof : int, default 1
Delta Degrees of Freedom. The divisor used in calculations is N - ddof, where N represents the number of elements.
numeric_only : bool, default None (Not supported in Dask)
Include only float, int, boolean columns. If None, will attempt to use everything, then use only numeric data. Not implemented for Series.
Returns: Series or DataFrame (if level specified)
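Added for illustration (not part of the copied pandas docstring; the frame is invented), including the Dask-specific split_every keyword:
>>> import pandas as pd  # doctest: +SKIP
>>> import dask.dataframe as dd  # doctest: +SKIP
>>> ddf = dd.from_pandas(pd.DataFrame({'x': [1.0, 2.0, 3.0, 4.0]}), npartitions=4)  # doctest: +SKIP
>>> ddf.var(ddof=1).compute()  # unbiased variance per column  # doctest: +SKIP
>>> ddf.var(split_every=2).compute()  # combine at most two intermediate results per reduction step  # doctest: +SKIP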
-
visualize
(filename='mydask', format=None, optimize_graph=False, **kwargs)¶ Render the computation of this object’s task graph using graphviz.
Requires graphviz to be installed.
Parameters: filename : str or None, optional
The name (without an extension) of the file to write to disk. If filename is None, no file will be written, and we communicate with dot using only pipes.
format : {‘png’, ‘pdf’, ‘dot’, ‘svg’, ‘jpeg’, ‘jpg’}, optional
Format in which to write output file. Default is ‘png’.
optimize_graph : bool, optional
If True, the graph is optimized before rendering. Otherwise, the graph is displayed as is. Default is False.
color: {None, ‘order’}, optional
Options to color nodes. Provide cmap= keyword for additional colormap.
**kwargs
Additional keyword arguments to forward to to_graphviz.
Returns: result : IPython.display.Image, IPython.display.SVG, or None
See dask.dot.dot_graph for more information.
See also
dask.base.visualize
,dask.dot.dot_graph
Notes
For more information on optimization see here:
https://docs.dask.org/en/latest/optimize.html
Examples
>>> x.visualize(filename='dask.pdf') # doctest: +SKIP >>> x.visualize(filename='dask.pdf', color='order') # doctest: +SKIP
-
where
(cond, other=nan)¶ Replace values where the condition is False.
This docstring was copied from pandas.core.frame.DataFrame.where.
Some inconsistencies with the Dask version may exist.
Parameters: cond : boolean Series/DataFrame, array-like, or callable
Where cond is True, keep the original value. Where False, replace with corresponding value from other. If cond is callable, it is computed on the Series/DataFrame and should return boolean Series/DataFrame or array. The callable must not change input Series/DataFrame (though pandas doesn’t check it).
New in version 0.18.1: A callable can be used as cond.
other : scalar, Series/DataFrame, or callable
Entries where cond is False are replaced with corresponding value from other. If other is callable, it is computed on the Series/DataFrame and should return scalar or Series/DataFrame. The callable must not change input Series/DataFrame (though pandas doesn’t check it).
New in version 0.18.1: A callable can be used as other.
inplace : bool, default False (Not supported in Dask)
Whether to perform the operation in place on the data.
axis : int, default None (Not supported in Dask)
Alignment axis if needed.
level : int, default None (Not supported in Dask)
Alignment level if needed.
errors : str, {‘raise’, ‘ignore’}, default ‘raise’ (Not supported in Dask)
Note that currently this parameter won’t affect the results and will always coerce to a suitable dtype.
- ‘raise’ : allow exceptions to be raised.
- ‘ignore’ : suppress exceptions. On error return original object.
try_cast : bool, default False (Not supported in Dask)
Try to cast the result back to the input type (if possible).
Returns: Same type as caller
See also
DataFrame.mask()
- Return an object of same shape as self.
Notes
The where method is an application of the if-then idiom. For each element in the calling DataFrame, if cond is True the element is used; otherwise the corresponding element from the DataFrame other is used.
The signature for DataFrame.where() differs from numpy.where(). Roughly df1.where(m, df2) is equivalent to np.where(m, df1, df2).
For further details and examples see the where documentation in indexing.
Examples
>>> s = pd.Series(range(5)) # doctest: +SKIP >>> s.where(s > 0) # doctest: +SKIP 0 NaN 1 1.0 2 2.0 3 3.0 4 4.0 dtype: float64
>>> s.mask(s > 0) # doctest: +SKIP 0 0.0 1 NaN 2 NaN 3 NaN 4 NaN dtype: float64
>>> s.where(s > 1, 10) # doctest: +SKIP 0 10 1 10 2 2 3 3 4 4 dtype: int64
>>> df = pd.DataFrame(np.arange(10).reshape(-1, 2), columns=['A', 'B']) # doctest: +SKIP >>> df # doctest: +SKIP A B 0 0 1 1 2 3 2 4 5 3 6 7 4 8 9 >>> m = df % 3 == 0 # doctest: +SKIP >>> df.where(m, -df) # doctest: +SKIP A B 0 0 -1 1 -2 3 2 -4 -5 3 6 -7 4 -8 9 >>> df.where(m, -df) == np.where(m, df, -df) # doctest: +SKIP A B 0 True True 1 True True 2 True True 3 True True 4 True True >>> df.where(m, -df) == df.mask(~m, -df) # doctest: +SKIP A B 0 True True 1 True True 2 True True 3 True True 4 True True
-
DataFrameGroupBy¶
-
class
dask.dataframe.groupby.
DataFrameGroupBy
(df, by=None, slice=None, group_keys=True, dropna=None, sort=None)¶ -
agg
(arg, split_every=None, split_out=1)¶ Aggregate using one or more operations over the specified axis.
This docstring was copied from pandas.core.groupby.generic.DataFrameGroupBy.agg.
Some inconsistencies with the Dask version may exist.
Parameters: func : function, str, list or dict
Function to use for aggregating the data. If a function, must either work when passed a DataFrame or when passed to DataFrame.apply.
Accepted combinations are:
- function
- string function name
- list of functions and/or function names, e.g.
[np.sum, 'mean']
- dict of axis labels -> functions, function names or list of such.
*args
Positional arguments to pass to func.
**kwargs
Keyword arguments to pass to func.
Returns: scalar, Series or DataFrame
The return can be:
- scalar : when Series.agg is called with single function
- Series : when DataFrame.agg is called with a single function
- DataFrame : when DataFrame.agg is called with several functions
Return scalar, Series or DataFrame.
See also
pandas.DataFrame.groupby.apply
,pandas.DataFrame.groupby.transform
,pandas.DataFrame.aggregate
Notes
agg is an alias for aggregate. Use the alias.
A passed user-defined-function will be passed a Series for evaluation.
Examples
>>> df = pd.DataFrame({'A': [1, 1, 2, 2], # doctest: +SKIP ... 'B': [1, 2, 3, 4], ... 'C': np.random.randn(4)})
>>> df # doctest: +SKIP A B C 0 1 1 0.362838 1 1 2 0.227877 2 2 3 1.267767 3 2 4 -0.562860
The aggregation is for each column.
>>> df.groupby('A').agg('min') # doctest: +SKIP B C A 1 1 0.227877 2 3 -0.562860
Multiple aggregations
>>> df.groupby('A').agg(['min', 'max']) # doctest: +SKIP B C min max min max A 1 1 2 0.227877 0.362838 2 3 4 -0.562860 1.267767
Select a column for aggregation
>>> df.groupby('A').B.agg(['min', 'max']) # doctest: +SKIP min max A 1 1 2 2 3 4
Different aggregations per column
>>> df.groupby('A').agg({'B': ['min', 'max'], 'C': 'sum'}) # doctest: +SKIP B C min max sum A 1 1 2 0.590716 2 3 4 0.704907
To control the output names with different aggregations per column, pandas supports “named aggregation”
>>> df.groupby("A").agg( # doctest: +SKIP ... b_min=pd.NamedAgg(column="B", aggfunc="min"), ... c_sum=pd.NamedAgg(column="C", aggfunc="sum")) b_min c_sum A 1 1 -1.956929 2 3 -0.322183
- The keywords are the output column names.
- The values are tuples whose first element is the column to select and the second element is the aggregation to apply to that column.
Pandas provides the pandas.NamedAgg namedtuple with the fields ['column', 'aggfunc'] to make it clearer what the arguments are. As usual, the aggregation can be a callable or a string alias.
See Named aggregation for more.
-
aggregate
(arg, split_every=None, split_out=1)¶ Aggregate using one or more operations over the specified axis.
This docstring was copied from pandas.core.groupby.generic.DataFrameGroupBy.aggregate.
Some inconsistencies with the Dask version may exist.
Parameters: func : function, str, list or dict
Function to use for aggregating the data. If a function, must either work when passed a DataFrame or when passed to DataFrame.apply.
Accepted combinations are:
- function
- string function name
- list of functions and/or function names, e.g.
[np.sum, 'mean']
- dict of axis labels -> functions, function names or list of such.
*args
Positional arguments to pass to func.
**kwargs
Keyword arguments to pass to func.
Returns: scalar, Series or DataFrame
The return can be:
- scalar : when Series.agg is called with single function
- Series : when DataFrame.agg is called with a single function
- DataFrame : when DataFrame.agg is called with several functions
Return scalar, Series or DataFrame.
See also
pandas.DataFrame.groupby.apply
,pandas.DataFrame.groupby.transform
,pandas.DataFrame.aggregate
Notes
agg is an alias for aggregate. Use the alias.
A passed user-defined-function will be passed a Series for evaluation.
Examples
>>> df = pd.DataFrame({'A': [1, 1, 2, 2], # doctest: +SKIP ... 'B': [1, 2, 3, 4], ... 'C': np.random.randn(4)})
>>> df # doctest: +SKIP A B C 0 1 1 0.362838 1 1 2 0.227877 2 2 3 1.267767 3 2 4 -0.562860
The aggregation is for each column.
>>> df.groupby('A').agg('min') # doctest: +SKIP B C A 1 1 0.227877 2 3 -0.562860
Multiple aggregations
>>> df.groupby('A').agg(['min', 'max']) # doctest: +SKIP B C min max min max A 1 1 2 0.227877 0.362838 2 3 4 -0.562860 1.267767
Select a column for aggregation
>>> df.groupby('A').B.agg(['min', 'max']) # doctest: +SKIP min max A 1 1 2 2 3 4
Different aggregations per column
>>> df.groupby('A').agg({'B': ['min', 'max'], 'C': 'sum'}) # doctest: +SKIP B C min max sum A 1 1 2 0.590716 2 3 4 0.704907
To control the output names with different aggregations per column, pandas supports “named aggregation”
>>> df.groupby("A").agg( # doctest: +SKIP ... b_min=pd.NamedAgg(column="B", aggfunc="min"), ... c_sum=pd.NamedAgg(column="C", aggfunc="sum")) b_min c_sum A 1 1 -1.956929 2 3 -0.322183
- The keywords are the output column names.
- The values are tuples whose first element is the column to select and the second element is the aggregation to apply to that column.
Pandas provides the pandas.NamedAgg namedtuple with the fields ['column', 'aggfunc'] to make it clearer what the arguments are. As usual, the aggregation can be a callable or a string alias.
See Named aggregation for more.
-
apply
(func, *args, **kwargs)¶ Parallel version of pandas GroupBy.apply
This mimics the pandas version except for the following:
- If the grouper does not align with the index then this causes a full shuffle. The order of rows within each group may not be preserved.
- Dask’s GroupBy.apply is not appropriate for aggregations. For custom
aggregations, use
dask.dataframe.groupby.Aggregation
.
Warning
Pandas’ groupby-apply can be used to apply arbitrary functions, including aggregations that result in one row per group. Dask’s groupby-apply will apply func once to each partition-group pair, so when func is a reduction you’ll end up with one row per partition-group pair. To apply a custom aggregation with Dask, use dask.dataframe.groupby.Aggregation.
Parameters: func: function
Function to apply
args, kwargs : Scalar, Delayed or object
Arguments and keywords to pass to the function.
meta : pd.DataFrame, pd.Series, dict, iterable, tuple, optional
An empty pd.DataFrame or pd.Series that matches the dtypes and column names of the output. This metadata is necessary for many algorithms in dask dataframe to work. For ease of use, some alternative inputs are also available. Instead of a DataFrame, a dict of {name: dtype} or iterable of (name, dtype) can be provided (note that the order of the names should match the order of the columns). Instead of a series, a tuple of (name, dtype) can be used. If not provided, dask will try to infer the metadata. This may lead to unexpected results, so providing meta is recommended. For more information, see dask.dataframe.utils.make_meta.
Returns: applied : Series or DataFrame depending on columns keyword
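As a hedged illustration of the advice above (the column names and the custom mean are invented for this sketch), a reduction expressed with dask.dataframe.groupby.Aggregation rather than groupby-apply:
>>> import pandas as pd  # doctest: +SKIP
>>> import dask.dataframe as dd  # doctest: +SKIP
>>> from dask.dataframe.groupby import Aggregation  # doctest: +SKIP
>>> custom_mean = Aggregation(
...     'custom_mean',
...     chunk=lambda s: (s.count(), s.sum()),                  # applied to each partition-group pair
...     agg=lambda count, total: (count.sum(), total.sum()),   # combine the per-partition results
...     finalize=lambda count, total: total / count,           # one value per group
... )  # doctest: +SKIP
>>> ddf = dd.from_pandas(pd.DataFrame({'g': ['a', 'a', 'b'], 'x': [1.0, 2.0, 3.0]}), npartitions=2)  # doctest: +SKIP
>>> ddf.groupby('g').agg(custom_mean).compute()  # doctest: +SKIP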
-
corr
(ddof=1, split_every=None, split_out=1)¶ Compute pairwise correlation of columns, excluding NA/null values.
This docstring was copied from pandas.core.frame.DataFrame.corr.
Some inconsistencies with the Dask version may exist.
Groupby correlation: corr(X, Y) = cov(X, Y) / (std_x * std_y)
Parameters: method : {‘pearson’, ‘kendall’, ‘spearman’} or callable (Not supported in Dask)
- pearson : standard correlation coefficient
- kendall : Kendall Tau correlation coefficient
- spearman : Spearman rank correlation
- callable: callable with input two 1d ndarrays and returning a float. Note that the returned matrix from corr will have 1 along the diagonals and will be symmetric regardless of the callable’s behavior. New in version 0.24.0.
min_periods : int, optional (Not supported in Dask)
Minimum number of observations required per pair of columns to have a valid result. Currently only available for Pearson and Spearman correlation.
Returns: DataFrame
Correlation matrix.
See also
DataFrame.corrwith
,Series.corr
Examples
>>> def histogram_intersection(a, b): # doctest: +SKIP ... v = np.minimum(a, b).sum().round(decimals=1) ... return v >>> df = pd.DataFrame([(.2, .3), (.0, .6), (.6, .0), (.2, .1)], # doctest: +SKIP ... columns=['dogs', 'cats']) >>> df.corr(method=histogram_intersection) # doctest: +SKIP dogs cats dogs 1.0 0.3 cats 0.3 1.0
-
count
(split_every=None, split_out=1)¶ Compute count of group, excluding missing values.
This docstring was copied from pandas.core.groupby.groupby.GroupBy.count.
Some inconsistencies with the Dask version may exist.
Returns: Series or DataFrame
Count of values within each group.
See also
Series.groupby
,DataFrame.groupby
-
cov
(ddof=1, split_every=None, split_out=1, std=False)¶ Compute pairwise covariance of columns, excluding NA/null values.
This docstring was copied from pandas.core.frame.DataFrame.cov.
Some inconsistencies with the Dask version may exist.
Groupby covariance is accomplished by
- Computing intermediate values for sum, count, and the product of all columns: a b c -> a*a, a*b, b*b, b*c, c*c.
- The values are then aggregated and the final covariance value is calculated: cov(X, Y) = X*Y - Xbar * Ybar
When std is True, the correlation is calculated instead of the covariance.
Compute the pairwise covariance among the series of a DataFrame. The returned data frame is the covariance matrix of the columns of the DataFrame.
Both NA and null values are automatically excluded from the calculation. (See the note below about bias from missing values.) A threshold can be set for the minimum number of observations for each value created. Comparisons with observations below this threshold will be returned as
NaN
.This method is generally used for the analysis of time series data to understand the relationship between different measures across time.
Parameters: min_periods : int, optional (Not supported in Dask)
Minimum number of observations required per pair of columns to have a valid result.
Returns: DataFrame
The covariance matrix of the series of the DataFrame.
See also
Series.cov
- Compute covariance with another Series.
core.window.EWM.cov
- Exponential weighted sample covariance.
core.window.Expanding.cov
- Expanding sample covariance.
core.window.Rolling.cov
- Rolling sample covariance.
Notes
Returns the covariance matrix of the DataFrame’s time series. The covariance is normalized by N-1.
For DataFrames that have Series that are missing data (assuming that data is missing at random) the returned covariance matrix will be an unbiased estimate of the variance and covariance between the member Series.
However, for many applications this estimate may not be acceptable because the estimate covariance matrix is not guaranteed to be positive semi-definite. This could lead to estimate correlations having absolute values which are greater than one, and/or a non-invertible covariance matrix. See Estimation of covariance matrices for more details.
Examples
>>> df = pd.DataFrame([(1, 2), (0, 3), (2, 0), (1, 1)], # doctest: +SKIP ... columns=['dogs', 'cats']) >>> df.cov() # doctest: +SKIP dogs cats dogs 0.666667 -1.000000 cats -1.000000 1.666667
>>> np.random.seed(42) # doctest: +SKIP >>> df = pd.DataFrame(np.random.randn(1000, 5), # doctest: +SKIP ... columns=['a', 'b', 'c', 'd', 'e']) >>> df.cov() # doctest: +SKIP a b c d e a 0.998438 -0.020161 0.059277 -0.008943 0.014144 b -0.020161 1.059352 -0.008543 -0.024738 0.009826 c 0.059277 -0.008543 1.010670 -0.001486 -0.000271 d -0.008943 -0.024738 -0.001486 0.921297 -0.013692 e 0.014144 0.009826 -0.000271 -0.013692 0.977795
Minimum number of periods
This method also supports an optional min_periods keyword that specifies the required minimum number of non-NA observations for each column pair in order to have a valid result:
>>> np.random.seed(42)  # doctest: +SKIP
>>> df = pd.DataFrame(np.random.randn(20, 3),  # doctest: +SKIP
...                   columns=['a', 'b', 'c'])
>>> df.loc[df.index[:5], 'a'] = np.nan  # doctest: +SKIP
>>> df.loc[df.index[5:10], 'b'] = np.nan  # doctest: +SKIP
>>> df.cov(min_periods=12)  # doctest: +SKIP
          a         b         c
a  0.316741       NaN -0.150812
b       NaN  1.248003  0.191417
c -0.150812  0.191417  0.895202
-
cumcount
(axis=None)¶ Number each item in each group from 0 to the length of that group - 1.
This docstring was copied from pandas.core.groupby.groupby.GroupBy.cumcount.
Some inconsistencies with the Dask version may exist.
Essentially this is equivalent to
>>> self.apply(lambda x: pd.Series(np.arange(len(x)), x.index)) # doctest: +SKIP
Parameters: ascending : bool, default True (Not supported in Dask)
If False, number in reverse, from length of group - 1 to 0.
Returns: Series
Sequence number of each element within each group.
See also
ngroup
- Number the groups themselves.
Examples
>>> df = pd.DataFrame([['a'], ['a'], ['a'], ['b'], ['b'], ['a']], # doctest: +SKIP ... columns=['A']) >>> df # doctest: +SKIP A 0 a 1 a 2 a 3 b 4 b 5 a >>> df.groupby('A').cumcount() # doctest: +SKIP 0 0 1 1 2 2 3 0 4 1 5 3 dtype: int64 >>> df.groupby('A').cumcount(ascending=False) # doctest: +SKIP 0 3 1 2 2 1 3 1 4 0 5 0 dtype: int64
-
cumprod
(axis=0)¶ Cumulative product for each group.
This docstring was copied from pandas.core.groupby.groupby.GroupBy.cumprod.
Some inconsistencies with the Dask version may exist.
Returns: Series or DataFrame
See also
Series.groupby
,DataFrame.groupby
-
cumsum
(axis=0)¶ Cumulative sum for each group.
This docstring was copied from pandas.core.groupby.groupby.GroupBy.cumsum.
Some inconsistencies with the Dask version may exist.
Returns: Series or DataFrame
See also
Series.groupby
,DataFrame.groupby
-
first
(split_every=None, split_out=1)¶ Compute first of group values.
This docstring was copied from pandas.core.groupby.groupby.GroupBy.first.
Some inconsistencies with the Dask version may exist.
Returns: Series or DataFrame
Computed first of values within each group.
-
get_group
(key)¶ Construct DataFrame from group with provided name.
This docstring was copied from pandas.core.groupby.groupby.GroupBy.get_group.
Some inconsistencies with the Dask version may exist.
Parameters: name : object (Not supported in Dask)
the name of the group to get as a DataFrame
obj : DataFrame, default None (Not supported in Dask)
the DataFrame to take the DataFrame out of. If it is None, the object groupby was called on will be used
Returns: group : same type as obj
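Added for illustration (not part of the copied docstring; the frame and key are invented), selecting a single group lazily:
>>> import pandas as pd  # doctest: +SKIP
>>> import dask.dataframe as dd  # doctest: +SKIP
>>> ddf = dd.from_pandas(pd.DataFrame({'A': [1, 1, 2], 'B': [4, 5, 6]}), npartitions=2)  # doctest: +SKIP
>>> ddf.groupby('A').get_group(1).compute()  # the rows where A == 1  # doctest: +SKIP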
-
idxmax
(split_every=None, split_out=1, axis=None, skipna=True)¶ Return index of first occurrence of maximum over requested axis. NA/null values are excluded.
This docstring was copied from pandas.core.frame.DataFrame.idxmax.
Some inconsistencies with the Dask version may exist.
Parameters: axis : {0 or ‘index’, 1 or ‘columns’}, default 0
0 or ‘index’ for row-wise, 1 or ‘columns’ for column-wise
skipna : boolean, default True
Exclude NA/null values. If an entire row/column is NA, the result will be NA.
Returns: Series
Indexes of maxima along the specified axis.
Raises: ValueError
- If the row/column is empty
See also
Series.idxmax
Notes
This method is the DataFrame version of
ndarray.argmax
.
-
idxmin
(split_every=None, split_out=1, axis=None, skipna=True)¶ Return index of first occurrence of minimum over requested axis. NA/null values are excluded.
This docstring was copied from pandas.core.frame.DataFrame.idxmin.
Some inconsistencies with the Dask version may exist.
Parameters: axis : {0 or ‘index’, 1 or ‘columns’}, default 0
0 or ‘index’ for row-wise, 1 or ‘columns’ for column-wise
skipna : boolean, default True
Exclude NA/null values. If an entire row/column is NA, the result will be NA.
Returns: Series
Indexes of minima along the specified axis.
Raises: ValueError
- If the row/column is empty
See also
Series.idxmin
Notes
This method is the DataFrame version of
ndarray.argmin
.
-
last
(split_every=None, split_out=1)¶ Compute last of group values.
This docstring was copied from pandas.core.groupby.groupby.GroupBy.last.
Some inconsistencies with the Dask version may exist.
Returns: Series or DataFrame
Computed last of values within each group.
-
max
(split_every=None, split_out=1)¶ Compute max of group values.
This docstring was copied from pandas.core.groupby.groupby.GroupBy.max.
Some inconsistencies with the Dask version may exist.
Returns: Series or DataFrame
Computed max of values within each group.
-
mean
(split_every=None, split_out=1)¶ Compute mean of groups, excluding missing values.
This docstring was copied from pandas.core.groupby.groupby.GroupBy.mean.
Some inconsistencies with the Dask version may exist.
Returns: pandas.Series or pandas.DataFrame
See also
Series.groupby
,DataFrame.groupby
Examples
>>> df = pd.DataFrame({'A': [1, 1, 2, 1, 2], # doctest: +SKIP ... 'B': [np.nan, 2, 3, 4, 5], ... 'C': [1, 2, 1, 1, 2]}, columns=['A', 'B', 'C'])
Groupby one column and return the mean of the remaining columns in each group.
>>> df.groupby('A').mean() # doctest: +SKIP B C A 1 3.0 1.333333 2 4.0 1.500000
Groupby two columns and return the mean of the remaining column.
>>> df.groupby(['A', 'B']).mean() # doctest: +SKIP C A B 1 2.0 2 4.0 1 2 3.0 1 5.0 2
Groupby one column and return the mean of only particular column in the group.
>>> df.groupby('A')['B'].mean() # doctest: +SKIP A 1 3.0 2 4.0 Name: B, dtype: float64
-
min
(split_every=None, split_out=1)¶ Compute min of group values.
This docstring was copied from pandas.core.groupby.groupby.GroupBy.min.
Some inconsistencies with the Dask version may exist.
Returns: Series or DataFrame
Computed min of values within each group.
-
prod
(split_every=None, split_out=1, min_count=None)¶ Compute prod of group values.
This docstring was copied from pandas.core.groupby.groupby.GroupBy.prod.
Some inconsistencies with the Dask version may exist.
Returns: Series or DataFrame
Computed prod of values within each group.
-
size
(split_every=None, split_out=1)¶ Compute group sizes.
This docstring was copied from pandas.core.groupby.groupby.GroupBy.size.
Some inconsistencies with the Dask version may exist.
Returns: Series
Number of rows in each group.
See also
Series.groupby
,DataFrame.groupby
-
std
(ddof=1, split_every=None, split_out=1)¶ Compute standard deviation of groups, excluding missing values.
This docstring was copied from pandas.core.groupby.groupby.GroupBy.std.
Some inconsistencies with the Dask version may exist.
For multiple groupings, the result index will be a MultiIndex.
Parameters: ddof : integer, default 1
degrees of freedom
Returns: Series or DataFrame
Standard deviation of values within each group.
See also
Series.groupby
,DataFrame.groupby
-
sum
(split_every=None, split_out=1, min_count=None)¶ Compute sum of group values.
This docstring was copied from pandas.core.groupby.groupby.GroupBy.sum.
Some inconsistencies with the Dask version may exist.
Returns: Series or DataFrame
Computed sum of values within each group.
-
transform
(func, *args, **kwargs)¶ Parallel version of pandas GroupBy.transform
This mimics the pandas version except for the following:
- If the grouper does not align with the index then this causes a full shuffle. The order of rows within each group may not be preserved.
- Dask’s GroupBy.transform is not appropriate for aggregations. For custom
aggregations, use
dask.dataframe.groupby.Aggregation
.
Warning
Pandas’ groupby-transform can be used to apply arbitrary functions, including aggregations that result in one row per group. Dask’s groupby-transform will apply func once to each partition-group pair, so when func is a reduction you’ll end up with one row per partition-group pair. To apply a custom aggregation with Dask, use dask.dataframe.groupby.Aggregation.
Parameters: func: function
Function to apply
args, kwargs : Scalar, Delayed or object
Arguments and keywords to pass to the function.
meta : pd.DataFrame, pd.Series, dict, iterable, tuple, optional
An empty pd.DataFrame or pd.Series that matches the dtypes and column names of the output. This metadata is necessary for many algorithms in dask dataframe to work. For ease of use, some alternative inputs are also available. Instead of a DataFrame, a dict of {name: dtype} or iterable of (name, dtype) can be provided (note that the order of the names should match the order of the columns). Instead of a series, a tuple of (name, dtype) can be used. If not provided, dask will try to infer the metadata. This may lead to unexpected results, so providing meta is recommended. For more information, see dask.dataframe.utils.make_meta.
Returns: applied : Series or DataFrame depending on columns keyword
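A hedged sketch (not part of the original docstring; the frame is invented) of transform with an explicit meta. It selects a single column to keep meta a simple (name, dtype) tuple; remember that func runs once per partition-group pair, and grouping on a non-index column triggers a shuffle first.
>>> import pandas as pd  # doctest: +SKIP
>>> import dask.dataframe as dd  # doctest: +SKIP
>>> ddf = dd.from_pandas(pd.DataFrame({'g': ['a', 'a', 'b', 'b'],
...                                    'x': [1.0, 2.0, 3.0, 4.0]}), npartitions=2)  # doctest: +SKIP
>>> ddf.groupby('g').x.transform(lambda s: s - s.mean(), meta=('x', 'f8')).compute()  # doctest: +SKIP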
-
var
(ddof=1, split_every=None, split_out=1)¶ Compute variance of groups, excluding missing values.
This docstring was copied from pandas.core.groupby.groupby.GroupBy.var.
Some inconsistencies with the Dask version may exist.
For multiple groupings, the result index will be a MultiIndex.
Parameters: ddof : integer, default 1
degrees of freedom
Returns: Series or DataFrame
Variance of values within each group.
See also
Series.groupby
,DataFrame.groupby
-
SeriesGroupBy¶
-
class
dask.dataframe.groupby.
SeriesGroupBy
(df, by=None, slice=None, **kwargs)¶ -
agg
(arg, split_every=None, split_out=1)¶ Aggregate using one or more operations over the specified axis.
This docstring was copied from pandas.core.groupby.generic.SeriesGroupBy.agg.
Some inconsistencies with the Dask version may exist.
Parameters: func : function, str, list or dict
Function to use for aggregating the data. If a function, must either work when passed a Series or when passed to Series.apply.
Accepted combinations are:
- function
- string function name
- list of functions and/or function names, e.g.
[np.sum, 'mean']
- dict of axis labels -> functions, function names or list of such.
*args
Positional arguments to pass to func.
**kwargs
Keyword arguments to pass to func.
Returns: scalar, Series or DataFrame
The return can be:
- scalar : when Series.agg is called with single function
- Series : when DataFrame.agg is called with a single function
- DataFrame : when DataFrame.agg is called with several functions
Return scalar, Series or DataFrame.
See also
pandas.Series.groupby.apply
,pandas.Series.groupby.transform
,pandas.Series.aggregate
Notes
agg is an alias for aggregate. Use the alias.
A passed user-defined-function will be passed a Series for evaluation.
Examples
>>> s = pd.Series([1, 2, 3, 4]) # doctest: +SKIP
>>> s # doctest: +SKIP 0 1 1 2 2 3 3 4 dtype: int64
>>> s.groupby([1, 1, 2, 2]).min() # doctest: +SKIP 1 1 2 3 dtype: int64
>>> s.groupby([1, 1, 2, 2]).agg('min') # doctest: +SKIP 1 1 2 3 dtype: int64
>>> s.groupby([1, 1, 2, 2]).agg(['min', 'max']) # doctest: +SKIP min max 1 1 2 2 3 4
The output column names can be controlled by passing the desired column names and aggregations as keyword arguments.
>>> s.groupby([1, 1, 2, 2]).agg( # doctest: +SKIP ... minimum='min', ... maximum='max', ... ) minimum maximum 1 1 2 2 3 4
-
aggregate
(arg, split_every=None, split_out=1)¶ Aggregate using one or more operations over the specified axis.
This docstring was copied from pandas.core.groupby.generic.SeriesGroupBy.aggregate.
Some inconsistencies with the Dask version may exist.
Parameters: func : function, str, list or dict
Function to use for aggregating the data. If a function, must either work when passed a Series or when passed to Series.apply.
Accepted combinations are:
- function
- string function name
- list of functions and/or function names, e.g.
[np.sum, 'mean']
- dict of axis labels -> functions, function names or list of such.
*args
Positional arguments to pass to func.
**kwargs
Keyword arguments to pass to func.
Returns: scalar, Series or DataFrame
The return can be:
- scalar : when Series.agg is called with single function
- Series : when DataFrame.agg is called with a single function
- DataFrame : when DataFrame.agg is called with several functions
Return scalar, Series or DataFrame.
See also
pandas.Series.groupby.apply
,pandas.Series.groupby.transform
,pandas.Series.aggregate
Notes
agg is an alias for aggregate. Use the alias.
A passed user-defined-function will be passed a Series for evaluation.
Examples
>>> s = pd.Series([1, 2, 3, 4]) # doctest: +SKIP
>>> s # doctest: +SKIP 0 1 1 2 2 3 3 4 dtype: int64
>>> s.groupby([1, 1, 2, 2]).min() # doctest: +SKIP 1 1 2 3 dtype: int64
>>> s.groupby([1, 1, 2, 2]).agg('min') # doctest: +SKIP 1 1 2 3 dtype: int64
>>> s.groupby([1, 1, 2, 2]).agg(['min', 'max']) # doctest: +SKIP min max 1 1 2 2 3 4
The output column names can be controlled by passing the desired column names and aggregations as keyword arguments.
>>> s.groupby([1, 1, 2, 2]).agg( # doctest: +SKIP ... minimum='min', ... maximum='max', ... ) minimum maximum 1 1 2 2 3 4
-
apply
(func, *args, **kwargs)¶ Parallel version of pandas GroupBy.apply
This mimics the pandas version except for the following:
- If the grouper does not align with the index then this causes a full shuffle. The order of rows within each group may not be preserved.
- Dask’s GroupBy.apply is not appropriate for aggregations. For custom
aggregations, use
dask.dataframe.groupby.Aggregation
.
Warning
Pandas’ groupby-apply can be used to apply arbitrary functions, including aggregations that result in one row per group. Dask’s groupby-apply will apply func once to each partition-group pair, so when func is a reduction you’ll end up with one row per partition-group pair. To apply a custom aggregation with Dask, use dask.dataframe.groupby.Aggregation.
Parameters: func: function
Function to apply
args, kwargs : Scalar, Delayed or object
Arguments and keywords to pass to the function.
meta : pd.DataFrame, pd.Series, dict, iterable, tuple, optional
An empty pd.DataFrame or pd.Series that matches the dtypes and column names of the output. This metadata is necessary for many algorithms in dask dataframe to work. For ease of use, some alternative inputs are also available. Instead of a DataFrame, a dict of {name: dtype} or iterable of (name, dtype) can be provided (note that the order of the names should match the order of the columns). Instead of a series, a tuple of (name, dtype) can be used. If not provided, dask will try to infer the metadata. This may lead to unexpected results, so providing meta is recommended. For more information, see dask.dataframe.utils.make_meta.
Returns: applied : Series or DataFrame depending on columns keyword
-
corr
(ddof=1, split_every=None, split_out=1)¶ Compute pairwise correlation of columns, excluding NA/null values.
This docstring was copied from pandas.core.frame.DataFrame.corr.
Some inconsistencies with the Dask version may exist.
Groupby correlation: corr(X, Y) = cov(X, Y) / (std_x * std_y)
Parameters: method : {‘pearson’, ‘kendall’, ‘spearman’} or callable (Not supported in Dask)
- pearson : standard correlation coefficient
- kendall : Kendall Tau correlation coefficient
- spearman : Spearman rank correlation
- callable: callable with input two 1d ndarrays and returning a float. Note that the returned matrix from corr will have 1 along the diagonals and will be symmetric regardless of the callable’s behavior. New in version 0.24.0.
min_periods : int, optional (Not supported in Dask)
Minimum number of observations required per pair of columns to have a valid result. Currently only available for Pearson and Spearman correlation.
Returns: DataFrame
Correlation matrix.
See also
DataFrame.corrwith
,Series.corr
Examples
>>> def histogram_intersection(a, b): # doctest: +SKIP ... v = np.minimum(a, b).sum().round(decimals=1) ... return v >>> df = pd.DataFrame([(.2, .3), (.0, .6), (.6, .0), (.2, .1)], # doctest: +SKIP ... columns=['dogs', 'cats']) >>> df.corr(method=histogram_intersection) # doctest: +SKIP dogs cats dogs 1.0 0.3 cats 0.3 1.0
-
count
(split_every=None, split_out=1)¶ Compute count of group, excluding missing values.
This docstring was copied from pandas.core.groupby.groupby.GroupBy.count.
Some inconsistencies with the Dask version may exist.
Returns: Series or DataFrame
Count of values within each group.
See also
Series.groupby
,DataFrame.groupby
-
cov
(ddof=1, split_every=None, split_out=1, std=False)¶ Compute pairwise covariance of columns, excluding NA/null values.
This docstring was copied from pandas.core.frame.DataFrame.cov.
Some inconsistencies with the Dask version may exist.
Groupby covariance is accomplished by
- Computing intermediate values for sum, count, and the product of all columns: a b c -> a*a, a*b, b*b, b*c, c*c.
- The values are then aggregated and the final covariance value is calculated: cov(X, Y) = X*Y - Xbar * Ybar
When std is True, the correlation is calculated instead of the covariance.
Compute the pairwise covariance among the series of a DataFrame. The returned data frame is the covariance matrix of the columns of the DataFrame.
Both NA and null values are automatically excluded from the calculation. (See the note below about bias from missing values.) A threshold can be set for the minimum number of observations for each value created. Comparisons with observations below this threshold will be returned as
NaN
.This method is generally used for the analysis of time series data to understand the relationship between different measures across time.
Parameters: min_periods : int, optional (Not supported in Dask)
Minimum number of observations required per pair of columns to have a valid result.
Returns: DataFrame
The covariance matrix of the series of the DataFrame.
See also
Series.cov
- Compute covariance with another Series.
core.window.EWM.cov
- Exponential weighted sample covariance.
core.window.Expanding.cov
- Expanding sample covariance.
core.window.Rolling.cov
- Rolling sample covariance.
Notes
Returns the covariance matrix of the DataFrame’s time series. The covariance is normalized by N-1.
For DataFrames that have Series that are missing data (assuming that data is missing at random) the returned covariance matrix will be an unbiased estimate of the variance and covariance between the member Series.
However, for many applications this estimate may not be acceptable because the estimate covariance matrix is not guaranteed to be positive semi-definite. This could lead to estimate correlations having absolute values which are greater than one, and/or a non-invertible covariance matrix. See Estimation of covariance matrices for more details.
Examples
>>> df = pd.DataFrame([(1, 2), (0, 3), (2, 0), (1, 1)], # doctest: +SKIP ... columns=['dogs', 'cats']) >>> df.cov() # doctest: +SKIP dogs cats dogs 0.666667 -1.000000 cats -1.000000 1.666667
>>> np.random.seed(42) # doctest: +SKIP >>> df = pd.DataFrame(np.random.randn(1000, 5), # doctest: +SKIP ... columns=['a', 'b', 'c', 'd', 'e']) >>> df.cov() # doctest: +SKIP a b c d e a 0.998438 -0.020161 0.059277 -0.008943 0.014144 b -0.020161 1.059352 -0.008543 -0.024738 0.009826 c 0.059277 -0.008543 1.010670 -0.001486 -0.000271 d -0.008943 -0.024738 -0.001486 0.921297 -0.013692 e 0.014144 0.009826 -0.000271 -0.013692 0.977795
Minimum number of periods
This method also supports an optional min_periods keyword that specifies the required minimum number of non-NA observations for each column pair in order to have a valid result:
>>> np.random.seed(42) # doctest: +SKIP >>> df = pd.DataFrame(np.random.randn(20, 3), # doctest: +SKIP ... columns=['a', 'b', 'c']) >>> df.loc[df.index[:5], 'a'] = np.nan # doctest: +SKIP >>> df.loc[df.index[5:10], 'b'] = np.nan # doctest: +SKIP >>> df.cov(min_periods=12) # doctest: +SKIP a b c a 0.316741 NaN -0.150812 b NaN 1.248003 0.191417 c -0.150812 0.191417 0.895202
-
cumcount
(axis=None)¶ Number each item in each group from 0 to the length of that group - 1.
This docstring was copied from pandas.core.groupby.groupby.GroupBy.cumcount.
Some inconsistencies with the Dask version may exist.
Essentially this is equivalent to
>>> self.apply(lambda x: pd.Series(np.arange(len(x)), x.index)) # doctest: +SKIP
Parameters: ascending : bool, default True (Not supported in Dask)
If False, number in reverse, from length of group - 1 to 0.
Returns: Series
Sequence number of each element within each group.
See also
ngroup
- Number the groups themselves.
Examples
>>> df = pd.DataFrame([['a'], ['a'], ['a'], ['b'], ['b'], ['a']], # doctest: +SKIP ... columns=['A']) >>> df # doctest: +SKIP A 0 a 1 a 2 a 3 b 4 b 5 a >>> df.groupby('A').cumcount() # doctest: +SKIP 0 0 1 1 2 2 3 0 4 1 5 3 dtype: int64 >>> df.groupby('A').cumcount(ascending=False) # doctest: +SKIP 0 3 1 2 2 1 3 1 4 0 5 0 dtype: int64
-
cumprod
(axis=0)¶ Cumulative product for each group.
This docstring was copied from pandas.core.groupby.groupby.GroupBy.cumprod.
Some inconsistencies with the Dask version may exist.
Returns: Series or DataFrame See also
Series.groupby
,DataFrame.groupby
-
cumsum
(axis=0)¶ Cumulative sum for each group.
This docstring was copied from pandas.core.groupby.groupby.GroupBy.cumsum.
Some inconsistencies with the Dask version may exist.
Returns: Series or DataFrame See also
Series.groupby
,DataFrame.groupby
-
first
(split_every=None, split_out=1)¶ Compute first of group values.
This docstring was copied from pandas.core.groupby.groupby.GroupBy.first.
Some inconsistencies with the Dask version may exist.
Returns: Series or DataFrame
Computed first of values within each group.
-
get_group
(key)¶ Construct DataFrame from group with provided name.
This docstring was copied from pandas.core.groupby.groupby.GroupBy.get_group.
Some inconsistencies with the Dask version may exist.
Parameters: name : object (Not supported in Dask)
the name of the group to get as a DataFrame
obj : DataFrame, default None (Not supported in Dask)
the DataFrame to take the DataFrame out of. If it is None, the object groupby was called on will be used
Returns: group : same type as obj
-
idxmax
(split_every=None, split_out=1, axis=None, skipna=True)¶ Return index of first occurrence of maximum over requested axis. NA/null values are excluded.
This docstring was copied from pandas.core.frame.DataFrame.idxmax.
Some inconsistencies with the Dask version may exist.
Parameters: axis : {0 or ‘index’, 1 or ‘columns’}, default 0
0 or ‘index’ for row-wise, 1 or ‘columns’ for column-wise
skipna : boolean, default True
Exclude NA/null values. If an entire row/column is NA, the result will be NA.
Returns: Series
Indexes of maxima along the specified axis.
Raises: ValueError
- If the row/column is empty
See also
Series.idxmax
Notes
This method is the DataFrame version of
ndarray.argmax
.
-
idxmin
(split_every=None, split_out=1, axis=None, skipna=True)¶ Return index of first occurrence of minimum over requested axis. NA/null values are excluded.
This docstring was copied from pandas.core.frame.DataFrame.idxmin.
Some inconsistencies with the Dask version may exist.
Parameters: axis : {0 or ‘index’, 1 or ‘columns’}, default 0
0 or ‘index’ for row-wise, 1 or ‘columns’ for column-wise
skipna : boolean, default True
Exclude NA/null values. If an entire row/column is NA, the result will be NA.
Returns: Series
Indexes of minima along the specified axis.
Raises: ValueError
- If the row/column is empty
See also
Series.idxmin
Notes
This method is the DataFrame version of
ndarray.argmin
.
-
last
(split_every=None, split_out=1)¶ Compute last of group values.
This docstring was copied from pandas.core.groupby.groupby.GroupBy.last.
Some inconsistencies with the Dask version may exist.
Returns: Series or DataFrame
Computed last of values within each group.
-
max
(split_every=None, split_out=1)¶ Compute max of group values.
This docstring was copied from pandas.core.groupby.groupby.GroupBy.max.
Some inconsistencies with the Dask version may exist.
Returns: Series or DataFrame
Computed max of values within each group.
-
mean
(split_every=None, split_out=1)¶ Compute mean of groups, excluding missing values.
This docstring was copied from pandas.core.groupby.groupby.GroupBy.mean.
Some inconsistencies with the Dask version may exist.
Returns: pandas.Series or pandas.DataFrame See also
Series.groupby
,DataFrame.groupby
Examples
>>> df = pd.DataFrame({'A': [1, 1, 2, 1, 2], # doctest: +SKIP ... 'B': [np.nan, 2, 3, 4, 5], ... 'C': [1, 2, 1, 1, 2]}, columns=['A', 'B', 'C'])
Groupby one column and return the mean of the remaining columns in each group.
>>> df.groupby('A').mean() # doctest: +SKIP B C A 1 3.0 1.333333 2 4.0 1.500000
Groupby two columns and return the mean of the remaining column.
>>> df.groupby(['A', 'B']).mean() # doctest: +SKIP C A B 1 2.0 2 4.0 1 2 3.0 1 5.0 2
Groupby one column and return the mean of only particular column in the group.
>>> df.groupby('A')['B'].mean() # doctest: +SKIP A 1 3.0 2 4.0 Name: B, dtype: float64
-
min
(split_every=None, split_out=1)¶ Compute min of group values.
This docstring was copied from pandas.core.groupby.groupby.GroupBy.min.
Some inconsistencies with the Dask version may exist.
Returns: Series or DataFrame
Computed min of values within each group.
-
prod
(split_every=None, split_out=1, min_count=None)¶ Compute prod of group values.
This docstring was copied from pandas.core.groupby.groupby.GroupBy.prod.
Some inconsistencies with the Dask version may exist.
Returns: Series or DataFrame
Computed prod of values within each group.
-
size
(split_every=None, split_out=1)¶ Compute group sizes.
This docstring was copied from pandas.core.groupby.groupby.GroupBy.size.
Some inconsistencies with the Dask version may exist.
Returns: Series
Number of rows in each group.
See also
Series.groupby
,DataFrame.groupby
-
std
(ddof=1, split_every=None, split_out=1)¶ Compute standard deviation of groups, excluding missing values.
This docstring was copied from pandas.core.groupby.groupby.GroupBy.std.
Some inconsistencies with the Dask version may exist.
For multiple groupings, the result index will be a MultiIndex.
Parameters: ddof : integer, default 1
degrees of freedom
Returns: Series or DataFrame
Standard deviation of values within each group.
See also
Series.groupby
,DataFrame.groupby
-
sum
(split_every=None, split_out=1, min_count=None)¶ Compute sum of group values.
This docstring was copied from pandas.core.groupby.groupby.GroupBy.sum.
Some inconsistencies with the Dask version may exist.
Returns: Series or DataFrame
Computed sum of values within each group.
-
transform
(func, *args, **kwargs)¶ Parallel version of pandas GroupBy.transform
This mimics the pandas version except for the following:
- If the grouper does not align with the index then this causes a full shuffle. The order of rows within each group may not be preserved.
- Dask's GroupBy.transform is not appropriate for aggregations. For custom aggregations, use dask.dataframe.groupby.Aggregation.
Warning
Pandas' groupby-transform can be used to apply arbitrary functions, including aggregations that result in one row per group. Dask's groupby-transform will apply func once to each partition-group pair, so when func is a reduction you'll end up with one row per partition-group pair. To apply a custom aggregation with Dask, use dask.dataframe.groupby.Aggregation.
Parameters: func: function
Function to apply
args, kwargs : Scalar, Delayed or object
Arguments and keywords to pass to the function.
meta : pd.DataFrame, pd.Series, dict, iterable, tuple, optional
An empty pd.DataFrame or pd.Series that matches the dtypes and column names of the output. This metadata is necessary for many algorithms in dask dataframe to work. For ease of use, some alternative inputs are also available. Instead of a DataFrame, a dict of {name: dtype} or iterable of (name, dtype) can be provided (note that the order of the names should match the order of the columns). Instead of a series, a tuple of (name, dtype) can be used. If not provided, dask will try to infer the metadata. This may lead to unexpected results, so providing meta is recommended. For more information, see dask.dataframe.utils.make_meta.
Returns: applied : Series or DataFrame depending on columns keyword
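As an illustration of the warning above, here is a minimal sketch (not from the original docstring; the data, column names and meta are made up for the example) using an element-wise function, which is the intended use of transform:
>>> import pandas as pd
>>> import dask.dataframe as dd
>>> pdf = pd.DataFrame({'g': ['a', 'a', 'b', 'b'], 'x': [1.0, 2.0, 3.0, 4.0]})
>>> ddf = dd.from_pandas(pdf, npartitions=2)
>>> # demean x within each group; the function returns one row per input row,
>>> # so it is safe for transform (a reduction here would instead give one row
>>> # per partition-group pair)
>>> ddf.groupby('g')['x'].transform(lambda s: s - s.mean(), meta=('x', 'f8')).compute()  # doctest: +SKIP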
-
unique
(split_every=None, split_out=1)¶ Return unique values of Series object.
This docstring was copied from pandas.core.groupby.generic.SeriesGroupBy.unique.
Some inconsistencies with the Dask version may exist.
Uniques are returned in order of appearance. Hash table-based unique, therefore does NOT sort.
Returns: ndarray or ExtensionArray
The unique values returned as a NumPy array. See Notes.
See also
unique
- Top-level unique method for any 1-d array-like object.
Index.unique
- Return Index with unique values from an Index object.
Notes
Returns the unique values as a NumPy array. In case of an extension-array backed Series, a new ExtensionArray of that type with just the unique values is returned. This includes
- Categorical
- Period
- Datetime with Timezone
- Interval
- Sparse
- IntegerNA
See Examples section.
Examples
>>> pd.Series([2, 1, 3, 3], name='A').unique() # doctest: +SKIP array([2, 1, 3])
>>> pd.Series([pd.Timestamp('2016-01-01') for _ in range(3)]).unique() # doctest: +SKIP array(['2016-01-01T00:00:00.000000000'], dtype='datetime64[ns]')
>>> pd.Series([pd.Timestamp('2016-01-01', tz='US/Eastern') # doctest: +SKIP ... for _ in range(3)]).unique() <DatetimeArray> ['2016-01-01 00:00:00-05:00'] Length: 1, dtype: datetime64[ns, US/Eastern]
An unordered Categorical will return categories in the order of appearance.
>>> pd.Series(pd.Categorical(list('baabc'))).unique() # doctest: +SKIP [b, a, c] Categories (3, object): [b, a, c]
An ordered Categorical preserves the category ordering.
>>> pd.Series(pd.Categorical(list('baabc'), categories=list('abc'), # doctest: +SKIP ... ordered=True)).unique() [b, a, c] Categories (3, object): [a < b < c]
-
var
(ddof=1, split_every=None, split_out=1)¶ Compute variance of groups, excluding missing values.
This docstring was copied from pandas.core.groupby.groupby.GroupBy.var.
Some inconsistencies with the Dask version may exist.
For multiple groupings, the result index will be a MultiIndex.
Parameters: ddof : integer, default 1
degrees of freedom
Returns: Series or DataFrame
Variance of values within each group.
See also
Series.groupby
,DataFrame.groupby
-
Custom Aggregation¶
-
class
dask.dataframe.groupby.
Aggregation
(name, chunk, agg, finalize=None)¶ User defined groupby-aggregation.
This class allows users to define their own custom aggregation in terms of operations on Pandas dataframes in a map-reduce style. You need to specify what operation to do on each chunk of data, how to combine those chunks of data together, and then how to finalize the result.
See Aggregate for more.
Parameters: name : str
the name of the aggregation. It should be unique, since intermediate results will be identified by this name.
chunk : callable
a function that will be called with the grouped column of each partition. It can either return a single series or a tuple of series. The index has to be equal to the groups.
agg : callable
a function that will be called to aggregate the results of each chunk. Again the argument(s) will be grouped series. If chunk returned a tuple, agg will be called with all of them as individual positional arguments.
finalize : callable
an optional finalizer that will be called with the results from the aggregation.
Examples
We could implement
sum
as follows:>>> custom_sum = dd.Aggregation( ... name='custom_sum', ... chunk=lambda s: s.sum(), ... agg=lambda s0: s0.sum() ... ) # doctest: +SKIP >>> df.groupby('g').agg(custom_sum) # doctest: +SKIP
We can implement
mean
as follows:>>> custom_mean = dd.Aggregation( ... name='custom_mean', ... chunk=lambda s: (s.count(), s.sum()), ... agg=lambda count, sum: (count.sum(), sum.sum()), ... finalize=lambda count, sum: sum / count, ... ) # doctest: +SKIP >>> df.groupby('g').agg(custom_mean) # doctest: +SKIP
Though of course, both of these are built-in and so you don’t need to implement them yourself.
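As a further illustration (a sketch, not part of the original docstring; the name custom_var and the group column 'g' are made up), a custom population variance can be written in the same style by carrying the count, sum and sum of squares through the chunk and agg steps:
>>> custom_var = dd.Aggregation(
...     name='custom_var',
...     # per-partition pieces: count, sum and sum of squares for each group
...     chunk=lambda s: (s.count(), s.sum(), s.agg(lambda x: (x ** 2).sum())),
...     # combine the pieces coming from all partitions
...     agg=lambda n, x, x2: (n.sum(), x.sum(), x2.sum()),
...     # population variance: E[X^2] - E[X]^2
...     finalize=lambda n, x, x2: x2 / n - (x / n) ** 2,
... )  # doctest: +SKIP
>>> df.groupby('g').agg(custom_var)  # doctest: +SKIP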
Storage and Conversion¶
-
dask.dataframe.
read_csv
(urlpath, blocksize=64000000, collection=True, lineterminator=None, compression=None, sample=256000, enforce=False, assume_missing=False, storage_options=None, include_path_column=False, **kwargs)¶ Read CSV files into a Dask.DataFrame
This parallelizes the
pandas.read_csv()
function in the following ways:It supports loading many files at once using globstrings:
>>> df = dd.read_csv('myfiles.*.csv') # doctest: +SKIP
In some cases it can break up large files:
>>> df = dd.read_csv('largefile.csv', blocksize=25e6) # 25MB chunks # doctest: +SKIP
It can read CSV files from external resources (e.g. S3, HDFS) by providing a URL:
>>> df = dd.read_csv('s3://bucket/myfiles.*.csv') # doctest: +SKIP >>> df = dd.read_csv('hdfs:///myfiles.*.csv') # doctest: +SKIP >>> df = dd.read_csv('hdfs://namenode.example.com/myfiles.*.csv') # doctest: +SKIP
Internally
dd.read_csv
usespandas.read_csv()
and supports many of the same keyword arguments with the same performance guarantees. See the docstring forpandas.read_csv()
for more information on available keyword arguments.Parameters: urlpath : string or list
Absolute or relative filepath(s). Prefix with a protocol like
s3://
to read from alternative filesystems. To read from multiple files you can pass a globstring or a list of paths, with the caveat that they must all have the same protocol.blocksize : str, int or None, optional
Number of bytes by which to cut up larger files. Default value is computed based on available physical memory and the number of cores. If
None
, use a single block for each file. Can be a number like 64000000 or a string like “64MB”collection : boolean, optional
Return a dask.dataframe if True or list of dask.delayed objects if False
sample : int, optional
Number of bytes to use when determining dtypes
assume_missing : bool, optional
If True, all integer columns that aren’t specified in
dtype
are assumed to contain missing values, and are converted to floats. Default is False.storage_options : dict, optional
Extra options that make sense for a particular storage connection, e.g. host, port, username, password, etc.
include_path_column : bool or str, optional
Whether or not to include the path to each particular file. If True a new column is added to the dataframe called
path
. If str, sets new column name. Default is False.**kwargs
Extra keyword arguments to forward to
pandas.read_csv()
.Notes
Dask dataframe tries to infer the
dtype
of each column by reading a sample from the start of the file (or of the first file if it’s a glob). Usually this works fine, but if thedtype
is different later in the file (or in other files) this can cause issues. For example, if all the rows in the sample had integer dtypes, but later on there was aNaN
, then this would error at compute time. To fix this, you have a few options:- Provide explicit dtypes for the offending columns using the
dtype
keyword. This is the recommended solution. - Use the
assume_missing
keyword to assume that all columns inferred as integers contain missing values, and convert them to floats. - Increase the size of the sample using the
sample
keyword.
It should also be noted that this function may fail if a CSV file includes quoted strings that contain the line terminator. To get around this you can specify
blocksize=None
to not split files into multiple partitions, at the cost of reduced parallelism.
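For example, the first two options above might look like the following sketch (the file pattern and column name are hypothetical, not from the original docstring):
>>> # option 1: spell out the dtype of the offending column explicitly
>>> df = dd.read_csv('data-*.csv', dtype={'amount': 'float64'})  # doctest: +SKIP
>>> # option 2: treat every inferred-integer column as possibly containing NaN
>>> df = dd.read_csv('data-*.csv', assume_missing=True)  # doctest: +SKIP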
-
dask.dataframe.
read_table
(urlpath, blocksize=64000000, collection=True, lineterminator=None, compression=None, sample=256000, enforce=False, assume_missing=False, storage_options=None, include_path_column=False, **kwargs)¶ Read delimited files into a Dask.DataFrame
This parallelizes the
pandas.read_table()
function in the following ways:It supports loading many files at once using globstrings:
>>> df = dd.read_table('myfiles.*.csv') # doctest: +SKIP
In some cases it can break up large files:
>>> df = dd.read_table('largefile.csv', blocksize=25e6) # 25MB chunks # doctest: +SKIP
It can read CSV files from external resources (e.g. S3, HDFS) by providing a URL:
>>> df = dd.read_table('s3://bucket/myfiles.*.csv') # doctest: +SKIP >>> df = dd.read_table('hdfs:///myfiles.*.csv') # doctest: +SKIP >>> df = dd.read_table('hdfs://namenode.example.com/myfiles.*.csv') # doctest: +SKIP
Internally
dd.read_table
usespandas.read_table()
and supports many of the same keyword arguments with the same performance guarantees. See the docstring forpandas.read_table()
for more information on available keyword arguments.Parameters: urlpath : string or list
Absolute or relative filepath(s). Prefix with a protocol like
s3://
to read from alternative filesystems. To read from multiple files you can pass a globstring or a list of paths, with the caveat that they must all have the same protocol.blocksize : str, int or None, optional
Number of bytes by which to cut up larger files. Default value is computed based on available physical memory and the number of cores. If
None
, use a single block for each file. Can be a number like 64000000 or a string like “64MB”collection : boolean, optional
Return a dask.dataframe if True or list of dask.delayed objects if False
sample : int, optional
Number of bytes to use when determining dtypes
assume_missing : bool, optional
If True, all integer columns that aren’t specified in
dtype
are assumed to contain missing values, and are converted to floats. Default is False.storage_options : dict, optional
Extra options that make sense for a particular storage connection, e.g. host, port, username, password, etc.
include_path_column : bool or str, optional
Whether or not to include the path to each particular file. If True a new column is added to the dataframe called
path
. If str, sets new column name. Default is False.**kwargs
Extra keyword arguments to forward to
pandas.read_table()
.Notes
Dask dataframe tries to infer the
dtype
of each column by reading a sample from the start of the file (or of the first file if it’s a glob). Usually this works fine, but if thedtype
is different later in the file (or in other files) this can cause issues. For example, if all the rows in the sample had integer dtypes, but later on there was aNaN
, then this would error at compute time. To fix this, you have a few options:- Provide explicit dtypes for the offending columns using the
dtype
keyword. This is the recommended solution. - Use the
assume_missing
keyword to assume that all columns inferred as integers contain missing values, and convert them to floats. - Increase the size of the sample using the
sample
keyword.
It should also be noted that this function may fail if a delimited file includes quoted strings that contain the line terminator. To get around this you can specify
blocksize=None
to not split files into multiple partitions, at the cost of reduced parallelism.
-
dask.dataframe.
read_fwf
(urlpath, blocksize=64000000, collection=True, lineterminator=None, compression=None, sample=256000, enforce=False, assume_missing=False, storage_options=None, include_path_column=False, **kwargs)¶ Read fixed-width files into a Dask.DataFrame
This parallelizes the
pandas.read_fwf()
function in the following ways:It supports loading many files at once using globstrings:
>>> df = dd.read_fwf('myfiles.*.csv') # doctest: +SKIP
In some cases it can break up large files:
>>> df = dd.read_fwf('largefile.csv', blocksize=25e6) # 25MB chunks # doctest: +SKIP
It can read CSV files from external resources (e.g. S3, HDFS) by providing a URL:
>>> df = dd.read_fwf('s3://bucket/myfiles.*.csv') # doctest: +SKIP >>> df = dd.read_fwf('hdfs:///myfiles.*.csv') # doctest: +SKIP >>> df = dd.read_fwf('hdfs://namenode.example.com/myfiles.*.csv') # doctest: +SKIP
Internally
dd.read_fwf
usespandas.read_fwf()
and supports many of the same keyword arguments with the same performance guarantees. See the docstring forpandas.read_fwf()
for more information on available keyword arguments.Parameters: urlpath : string or list
Absolute or relative filepath(s). Prefix with a protocol like
s3://
to read from alternative filesystems. To read from multiple files you can pass a globstring or a list of paths, with the caveat that they must all have the same protocol.blocksize : str, int or None, optional
Number of bytes by which to cut up larger files. Default value is computed based on available physical memory and the number of cores. If
None
, use a single block for each file. Can be a number like 64000000 or a string like “64MB”collection : boolean, optional
Return a dask.dataframe if True or list of dask.delayed objects if False
sample : int, optional
Number of bytes to use when determining dtypes
assume_missing : bool, optional
If True, all integer columns that aren’t specified in
dtype
are assumed to contain missing values, and are converted to floats. Default is False.storage_options : dict, optional
Extra options that make sense for a particular storage connection, e.g. host, port, username, password, etc.
include_path_column : bool or str, optional
Whether or not to include the path to each particular file. If True a new column is added to the dataframe called
path
. If str, sets new column name. Default is False.**kwargs
Extra keyword arguments to forward to
pandas.read_fwf()
.Notes
Dask dataframe tries to infer the
dtype
of each column by reading a sample from the start of the file (or of the first file if it’s a glob). Usually this works fine, but if thedtype
is different later in the file (or in other files) this can cause issues. For example, if all the rows in the sample had integer dtypes, but later on there was aNaN
, then this would error at compute time. To fix this, you have a few options:- Provide explicit dtypes for the offending columns using the
dtype
keyword. This is the recommended solution. - Use the
assume_missing
keyword to assume that all columns inferred as integers contain missing values, and convert them to floats. - Increase the size of the sample using the
sample
keyword.
It should also be noted that this function may fail if a fixed-width file includes quoted strings that contain the line terminator. To get around this you can specify
blocksize=None
to not split files into multiple partitions, at the cost of reduced parallelism.
-
dask.dataframe.
read_parquet
(path, columns=None, filters=None, categories=None, index=None, storage_options=None, engine='auto', gather_statistics=None, split_row_groups=True, chunksize=None, **kwargs)¶ Read a Parquet file into a Dask DataFrame
This reads a directory of Parquet data into a Dask.dataframe, one file per partition. It selects the index among the sorted columns if any exist.
Parameters: path : string or list
Source directory for data, or path(s) to individual parquet files. Prefix with a protocol like
s3://
to read from alternative filesystems. To read from multiple files you can pass a globstring or a list of paths, with the caveat that they must all have the same protocol.columns : string, list or None (default)
Field name(s) to read in as columns in the output. By default all non-index fields will be read (as determined by the pandas parquet metadata, if present). Provide a single field name instead of a list to read in the data as a Series.
filters : Union[List[Tuple[str, str, Any]], List[List[Tuple[str, str, Any]]]]
List of filters to apply, like
[[('x', '=', 0), ...], ...]
. This implements partition-level (hive) filtering only, i.e., it prevents the loading of some row-groups and/or files.
Predicates can be expressed in disjunctive normal form (DNF). This means that the innermost tuple describes a single column predicate. These inner predicates are combined with an AND conjunction into a larger predicate. The outer-most list then combines all of the combined filters with an OR disjunction.
Predicates can also be expressed as a List[Tuple]. These are evaluated as an AND conjunction. To express OR in predicates, one must use the (preferred) List[List[Tuple]] notation (see the sketch after the examples below).
index : string, list, False or None (default)
Field name(s) to use as the output frame index. By default will be inferred from the pandas parquet file metadata (if present). Use False to read all fields as columns.
categories : list, dict or None
For any fields listed here, if the parquet encoding is Dictionary, the column will be created with dtype category. Use only if it is guaranteed that the column is encoded as dictionary in all row-groups. If a list, assumes up to 2**16-1 labels; if a dict, specify the number of labels expected; if None, will load categories automatically for data written by dask/fastparquet, not otherwise.
storage_options : dict
Key/value pairs to be passed on to the file-system backend, if any.
engine : {‘auto’, ‘fastparquet’, ‘pyarrow’}, default ‘auto’
Parquet reader library to use. If only one library is installed, it will use that one; if both, it will use ‘fastparquet’
gather_statistics : bool or None (default).
Gather the statistics for each dataset partition. By default, this will only be done if the _metadata file is available. Otherwise, statistics will only be gathered if True, because the footer of every file will be parsed (which is very slow on some systems).
split_row_groups : bool
If True (default) then output dataframe partitions will correspond to parquet-file row-groups (when enough row-group metadata is available). Otherwise, partitions correspond to distinct files. Only the “pyarrow” engine currently supports this argument.
chunksize : int, str
The target task partition size. If set, consecutive row-groups from the same file will be aggregated into the same output partition until the aggregate size reaches this value.
**kwargs: dict (of dicts)
Passthrough key-word arguments for read backend. The top-level keys correspond to the appropriate operation type, and the second level corresponds to the kwargs that will be passed on to the underlying pyarrow or fastparquet function. Supported top-level keys: ‘dataset’ (for opening a pyarrow dataset), ‘file’ (for opening a fastparquet ParquetFile), and ‘read’ (for the backend read function)
See also
Examples
>>> df = dd.read_parquet('s3://bucket/my-parquet-data') # doctest: +SKIP
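To illustrate the filters notation described above, a hypothetical sketch (the column names and values are made up, not from the original docstring):
>>> # keep row-groups/files whose statistics allow year == 2019 AND month >= 6,
>>> # OR year == 2020 (DNF: outer list = OR, inner lists = AND)
>>> df = dd.read_parquet(
...     's3://bucket/my-parquet-data',
...     filters=[[('year', '==', 2019), ('month', '>=', 6)],
...              [('year', '==', 2020)]],
... )  # doctest: +SKIP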
-
dask.dataframe.
read_orc
(path, columns=None, storage_options=None)¶ Read dataframe from ORC file(s)
Parameters: path: str or list(str)
Location of file(s), which can be a full URL with protocol specifier, and may include glob character if a single string.
columns: None or list(str)
Columns to load. If None, loads all.
storage_options: None or dict
Further parameters to pass to the bytes backend.
Returns: Dask.DataFrame (even if there is only one column)
Examples
>>> df = dd.read_orc('https://github.com/apache/orc/raw/' ... 'master/examples/demo-11-zlib.orc') # doctest: +SKIP
-
dask.dataframe.
read_hdf
(pattern, key, start=0, stop=None, columns=None, chunksize=1000000, sorted_index=False, lock=True, mode='a')¶ Read HDF files into a Dask DataFrame
Read hdf files into a dask dataframe. This function is like
pandas.read_hdf
, except it can read from a single large file, or from multiple files, or from multiple keys from the same file.Parameters: pattern : string, pathlib.Path, list
File pattern (string), pathlib.Path, buffer to read from, or list of file paths. Can contain wildcards.
key : group identifier in the store. Can contain wildcards
start : optional, integer (defaults to 0), row number to start at
stop : optional, integer (defaults to None, the last row), row number to
stop at
columns : list of columns, optional
A list of columns that if not None, will limit the return columns (default is None)
chunksize : positive integer, optional
Maximal number of rows per partition (default is 1000000).
sorted_index : boolean, optional
Option to specify whether or not the input hdf files have a sorted index (default is False).
lock : boolean, optional
Option to use a lock to prevent concurrency issues (default is True).
mode : {‘a’, ‘r’, ‘r+’}, default ‘a’. Mode to use when opening file(s).
- ‘r’
Read-only; no data can be modified.
- ‘a’
Append; an existing file is opened for reading and writing, and if the file does not exist it is created.
- ‘r+’
It is similar to ‘a’, but the file must already exist.
Returns: dask.DataFrame
Examples
Load single file
>>> dd.read_hdf('myfile.1.hdf5', '/x') # doctest: +SKIP
Load multiple files
>>> dd.read_hdf('myfile.*.hdf5', '/x') # doctest: +SKIP
>>> dd.read_hdf(['myfile.1.hdf5', 'myfile.2.hdf5'], '/x') # doctest: +SKIP
Load multiple datasets
>>> dd.read_hdf('myfile.1.hdf5', '/*') # doctest: +SKIP
-
dask.dataframe.
read_json
(url_path, orient='records', lines=None, storage_options=None, blocksize=None, sample=1048576, encoding='utf-8', errors='strict', compression='infer', meta=None, engine=<function read_json>, **kwargs)¶ Create a dataframe from a set of JSON files
This utilises
pandas.read_json()
, and most parameters are passed through - see its docstring.Differences: orient is ‘records’ by default, with lines=True; this is appropriate for line-delimited “JSON-lines” data, the kind of JSON output that is most common in big-data scenarios, and which can be chunked when reading (see
read_json()
). All other options require blocksize=None, i.e., one partition per input file.Parameters: url_path: str, list of str
Location to read from. If a string, can include a glob character to find a set of file names. Supports protocol specifications such as
"s3://"
.encoding, errors:
The text encoding to implement, e.g., “utf-8” and how to respond to errors in the conversion (see
str.encode()
).orient, lines, kwargs
passed to pandas; if not specified, lines=True when orient=’records’, False otherwise.
storage_options: dict
Passed to backend file-system implementation
blocksize: None or int
If None, files are not blocked, and you get one partition per input file. If int, which can only be used for line-delimited JSON files, each partition will be approximately this size in bytes, to the nearest newline character.
sample: int
Number of bytes to pre-load, to provide an empty dataframe structure to any blocks without data. Only relevant if using blocksize.
encoding, errors:
Text conversion,
see bytes.decode()
compression : string or None
String like ‘gzip’ or ‘xz’.
engine : function object, default
pd.read_json
The underlying function that dask will use to read JSON files. By default, this will be the pandas JSON reader (
pd.read_json
).meta : pd.DataFrame, pd.Series, dict, iterable, tuple, optional
An empty pd.DataFrame or pd.Series that matches the dtypes and column names of the output. This metadata is necessary for many algorithms in dask dataframe to work. For ease of use, some alternative inputs are also available. Instead of a DataFrame, a dict of {name: dtype} or iterable of (name, dtype) can be provided (note that the order of the names should match the order of the columns). Instead of a series, a tuple of (name, dtype) can be used. If not provided, dask will try to infer the metadata. This may lead to unexpected results, so providing meta is recommended. For more information, see dask.dataframe.utils.make_meta.
Returns: dask.DataFrame
Examples
Load single file
>>> dd.read_json('myfile.1.json') # doctest: +SKIP
Load multiple files
>>> dd.read_json('myfile.*.json') # doctest: +SKIP
>>> dd.read_json(['myfile.1.json', 'myfile.2.json']) # doctest: +SKIP
Load large line-delimited JSON files using partitions of approx 256MB size
>>> dd.read_json('data/file*.json', blocksize=2**28) # doctest: +SKIP
-
dask.dataframe.
read_sql_table
(table, uri, index_col, divisions=None, npartitions=None, limits=None, columns=None, bytes_per_chunk=268435456, head_rows=5, schema=None, meta=None, engine_kwargs=None, **kwargs)¶ Create dataframe from an SQL table.
If neither divisions nor npartitions is given, the memory footprint of the first few rows will be determined, and partitions of size ~256MB will be used.
Parameters: table : string or sqlalchemy expression
Select columns from here.
uri : string
Full sqlalchemy URI for the database connection
index_col : string
Column which becomes the index, and defines the partitioning. Should be an indexed column in the SQL server, and any orderable type. If the type is number or time, then partition boundaries can be inferred from npartitions or bytes_per_chunk; otherwise you must supply explicit divisions=. index_col could be a function to return a value, e.g., sql.func.abs(sql.column('value')).label('abs(value)'), index_col=sql.func.abs(sql.column("value")).label("abs(value)"), or index_col=cast(sql.column("id"), types.BigInteger).label("id") to convert the text field id to BigInteger.
Note
sql, cast and types come from the sqlalchemy module.
Labeling columns created by functions or arithmetic operations is required (see the sketch after the examples below).
divisions: sequence
Values of the index column to split the table by. If given, this will override npartitions and bytes_per_chunk. The divisions are the value boundaries of the index column used to define the partitions. For example,
divisions=list('acegikmoqsuwz')
could be used to partition a string column lexographically into 12 partitions, with the implicit assumption that each partition contains similar numbers of records.npartitions : int
Number of partitions, if divisions is not given. Will split the values of the index column linearly between limits, if given, or the column max/min. The index column must be numeric or time for this to work
limits: 2-tuple or None
Manually give upper and lower range of values for use with npartitions; if None, first fetches max/min from the DB. Upper limit, if given, is inclusive.
columns : list of strings or None
Which columns to select; if None, gets all; can include sqlalchemy functions, e.g.,
sql.func.abs(sql.column('value')).label('abs(value)')
. Labeling columns created by functions or arithmetic operations is recommended.bytes_per_chunk : int
If both divisions and npartitions are None, this is the target size of each partition, in bytes
head_rows : int
How many rows to load for inferring the data-types, unless passing meta
meta : empty DataFrame or None
If provided, do not attempt to infer dtypes, but use these, coercing all chunks on load
schema : str or None
If using a table name, pass this to sqlalchemy to select which DB schema to use within the URI connection
engine_kwargs : dict or None
Specific db engine parameters for sqlalchemy
kwargs : dict
Additional parameters to pass to pd.read_sql()
Returns: dask.dataframe
Examples
>>> df = dd.read_sql_table('accounts', 'sqlite:///path/to/bank.db', ... npartitions=10, index_col='id') # doctest: +SKIP
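A sketch of the function-valued index_col usage mentioned above (not from the original docstring; the table and database names are reused from the example, and the id column is hypothetical):
>>> import sqlalchemy as sql  # doctest: +SKIP
>>> from sqlalchemy import cast, types  # doctest: +SKIP
>>> df = dd.read_sql_table(
...     'accounts', 'sqlite:///path/to/bank.db', npartitions=10,
...     # cast the text field "id" to BigInteger; label() is required for
...     # columns created by functions or arithmetic operations
...     index_col=cast(sql.column('id'), types.BigInteger).label('id'),
... )  # doctest: +SKIP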
-
dask.dataframe.
from_array
(x, chunksize=50000, columns=None)¶ Read any slicable array into a Dask Dataframe
Uses getitem syntax to pull slices out of the array. The array need not be a NumPy array but must support slicing syntax
x[50000:100000]
and have 2 dimensions:
x.ndim == 2
or have a record dtype:
x.dtype == [('name', 'O'), ('balance', 'i8')]
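A minimal sketch (not from the original docstring; the array and column names are made up) converting a small NumPy array:
>>> import numpy as np
>>> import dask.dataframe as dd
>>> x = np.arange(10).reshape(5, 2)   # any 2-d sliceable array works
>>> df = dd.from_array(x, chunksize=2, columns=['a', 'b'])
>>> df.compute()  # doctest: +SKIP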
-
dask.dataframe.
from_pandas
(data, npartitions=None, chunksize=None, sort=True, name=None)¶ Construct a Dask DataFrame from a Pandas DataFrame
This splits an in-memory Pandas dataframe into several parts and constructs a dask.dataframe from those parts on which Dask.dataframe can operate in parallel.
Note that, despite parallelism, Dask.dataframe may not always be faster than Pandas. We recommend that you stay with Pandas for as long as possible before switching to Dask.dataframe.
Parameters: data : pandas.DataFrame or pandas.Series
The DataFrame/Series with which to construct a Dask DataFrame/Series
npartitions : int, optional
The number of partitions of the index to create. Note that depending on the size and index of the dataframe, the output may have fewer partitions than requested.
chunksize : int, optional
The number of rows per index partition to use.
sort: bool
Sort the input first to obtain cleanly divided partitions; if False, the input is not sorted and the partitions are not cleanly divided
name: string, optional
An optional keyname for the dataframe. Defaults to hashing the input
Returns: dask.DataFrame or dask.Series
A dask DataFrame/Series partitioned along the index
Raises: TypeError
If something other than a
pandas.DataFrame
orpandas.Series
is passed in.See also
from_array
- Construct a dask.DataFrame from an array that has record dtype
read_csv
- Construct a dask.DataFrame from a CSV file
Examples
>>> df = pd.DataFrame(dict(a=list('aabbcc'), b=list(range(6))), ... index=pd.date_range(start='20100101', periods=6)) >>> ddf = from_pandas(df, npartitions=3) >>> ddf.divisions # doctest: +NORMALIZE_WHITESPACE (Timestamp('2010-01-01 00:00:00', freq='D'), Timestamp('2010-01-03 00:00:00', freq='D'), Timestamp('2010-01-05 00:00:00', freq='D'), Timestamp('2010-01-06 00:00:00', freq='D')) >>> ddf = from_pandas(df.a, npartitions=3) # Works with Series too! >>> ddf.divisions # doctest: +NORMALIZE_WHITESPACE (Timestamp('2010-01-01 00:00:00', freq='D'), Timestamp('2010-01-03 00:00:00', freq='D'), Timestamp('2010-01-05 00:00:00', freq='D'), Timestamp('2010-01-06 00:00:00', freq='D'))
-
dask.dataframe.
from_bcolz
(x, chunksize=None, categorize=True, index=None, lock=<unlocked _thread.lock object>, **kwargs)¶ Read BColz CTable into a Dask Dataframe
BColz is a fast on-disk compressed column store with careful attention given to compression. https://bcolz.readthedocs.io/en/latest/
Parameters: x : bcolz.ctable
chunksize : int, optional
The size(rows) of blocks to pull out from ctable.
categorize : bool, defaults to True
Automatically categorize all string dtypes
index : string, optional
Column to make the index
lock: bool or Lock
Lock to use when reading or False for no lock (not-thread-safe)
See also
from_array
- more generic function not optimized for bcolz
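A minimal sketch (not from the original documentation; assumes the optional bcolz package is installed and the column names are made up):
>>> import bcolz  # doctest: +SKIP
>>> import dask.dataframe as dd  # doctest: +SKIP
>>> # build a small on-memory ctable and wrap it in a dask dataframe
>>> ct = bcolz.ctable([[1, 2, 3], [10.0, 20.0, 30.0]], names=['id', 'value'])  # doctest: +SKIP
>>> df = dd.from_bcolz(ct, chunksize=2)  # doctest: +SKIP
>>> df.compute()  # doctest: +SKIP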
-
dask.dataframe.
from_dask_array
(x, columns=None, index=None)¶ Create a Dask DataFrame from a Dask Array.
Converts a 2d array into a DataFrame and a 1d array into a Series.
Parameters: x : da.Array
columns : list or string
list of column names if DataFrame, single string if Series
index : dask.dataframe.Index, optional
An optional dask Index to use for the output Series or DataFrame.
The default output index depends on whether x has any unknown chunks. If there are any unknown chunks, the output has
None
for all the divisions (one per chunk). If all the chunks are known, a default index with known divisions is created.
Specifying index can be useful if you're conforming a Dask Array to an existing dask Series or DataFrame, and you would like the indices to match.
See also
dask.bag.to_dataframe
- from dask.bag
dask.dataframe._Frame.values
- Reverse conversion
dask.dataframe._Frame.to_records
- Reverse conversion
Examples
>>> import dask.array as da >>> import dask.dataframe as dd >>> x = da.ones((4, 2), chunks=(2, 2)) >>> df = dd.io.from_dask_array(x, columns=['a', 'b']) >>> df.compute() a b 0 1.0 1.0 1 1.0 1.0 2 1.0 1.0 3 1.0 1.0
-
dask.dataframe.
from_delayed
(dfs, meta=None, divisions=None, prefix='from-delayed', verify_meta=True)¶ Create Dask DataFrame from many Dask Delayed objects
Parameters: dfs : list of Delayed
An iterable of
dask.delayed.Delayed
objects, such as come fromdask.delayed
These comprise the individual partitions of the resulting dataframe.meta : pd.DataFrame, pd.Series, dict, iterable, tuple, optional
An empty pd.DataFrame or pd.Series that matches the dtypes and column names of the output. This metadata is necessary for many algorithms in dask dataframe to work. For ease of use, some alternative inputs are also available. Instead of a DataFrame, a dict of {name: dtype} or iterable of (name, dtype) can be provided (note that the order of the names should match the order of the columns). Instead of a series, a tuple of (name, dtype) can be used. If not provided, dask will try to infer the metadata. This may lead to unexpected results, so providing meta is recommended. For more information, see dask.dataframe.utils.make_meta.
divisions : tuple, str, optional
Partition boundaries along the index. For tuple, see https://docs.dask.org/en/latest/dataframe-design.html#partitions For string ‘sorted’ will compute the delayed values to find index values. Assumes that the indexes are mutually sorted. If None, then won’t use index information
prefix : str, optional
Prefix to prepend to the keys.
verify_meta : bool, optional
If True check that the partitions have consistent metadata, defaults to True.
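A minimal sketch (not part of the original docstring; the load helper and its data are hypothetical) building a two-partition frame from delayed objects:
>>> import pandas as pd
>>> import dask
>>> import dask.dataframe as dd
>>> @dask.delayed
... def load(part):
...     # stand-in for reading one partition from some custom source
...     return pd.DataFrame({'x': [part, part + 1]})
>>> parts = [load(0), load(10)]
>>> ddf = dd.from_delayed(parts, meta=pd.DataFrame({'x': pd.Series(dtype='int64')}))
>>> ddf.compute()  # doctest: +SKIP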
-
dask.dataframe.
to_records
(df)¶ Create Dask Array from a Dask Dataframe
Warning: This creates a dask.array without precise shape information. Operations that depend on shape information, like slicing or reshaping, will not work.
See also
dask.dataframe._Frame.values
,dask.dataframe.from_dask_array
Examples
>>> df.to_records() # doctest: +SKIP dask.array<to_records, shape=(nan,), dtype=(numpy.record, [('ind', '<f8'), ('x', 'O'), ('y', '<i8')]), chunksize=(nan,), chunktype=numpy.ndarray> # noqa: E501
-
dask.dataframe.
to_csv
(df, filename, single_file=False, encoding='utf-8', mode='wt', name_function=None, compression=None, compute=True, scheduler=None, storage_options=None, header_first_partition_only=None, **kwargs)¶ Store Dask DataFrame to CSV files
One filename per partition will be created. You can specify the filenames in a variety of ways.
Use a globstring:
>>> df.to_csv('/path/to/data/export-*.csv')
The * will be replaced by the increasing sequence 0, 1, 2, …
/path/to/data/export-0.csv /path/to/data/export-1.csv
Use a globstring and a
name_function=
keyword argument. The name_function function should expect an integer and produce a string. Strings produced by name_function must preserve the order of their respective partition indices.>>> from datetime import date, timedelta >>> def name(i): ... return str(date(2015, 1, 1) + i * timedelta(days=1))
>>> name(0) '2015-01-01' >>> name(15) '2015-01-16'
>>> df.to_csv('/path/to/data/export-*.csv', name_function=name) # doctest: +SKIP
/path/to/data/export-2015-01-01.csv /path/to/data/export-2015-01-02.csv ...
You can also provide an explicit list of paths:
>>> paths = ['/path/to/data/alice.csv', '/path/to/data/bob.csv', ...] >>> df.to_csv(paths)
Parameters: filename : string
Path glob indicating the naming scheme for the output files
name_function : callable, default None
Function accepting an integer (partition index) and producing a string to replace the asterisk in the given filename globstring. Should preserve the lexicographic order of partitions. Not supported when single_file is True.
single_file : bool, default False
Whether to save everything into a single CSV file. Under the single file mode, each partition is appended at the end of the specified CSV file. Note that not all filesystems support the append mode and thus the single file mode, especially on cloud storage systems such as S3 or GCS. A warning will be issued when writing to a file that is not backed by a local filesystem.
compression : string or None
String like ‘gzip’ or ‘xz’. Must support efficient random access. Filenames with extensions corresponding to known compression algorithms (gz, bz2) will be compressed accordingly automatically
sep : character, default ‘,’
Field delimiter for the output file
na_rep : string, default ‘’
Missing data representation
float_format : string, default None
Format string for floating point numbers
columns : sequence, optional
Columns to write
header : boolean or list of string, default True
Write out column names. If a list of string is given it is assumed to be aliases for the column names
header_first_partition_only : boolean, default None
If set to True, only write the header row in the first output file. By default, headers are written to all partitions under the multiple file mode (single_file is False) and written only once under the single file mode (single_file is True). It must not be False under the single file mode.
index : boolean, default True
Write row names (index)
index_label : string or sequence, or False, default None
Column label for index column(s) if desired. If None is given, and header and index are True, then the index names are used. A sequence should be given if the DataFrame uses MultiIndex. If False do not print fields for index names. Use index_label=False for easier importing in R
nanRep : None
deprecated, use na_rep
mode : str
Python write mode, default ‘w’
encoding : string, optional
A string representing the encoding to use in the output file, defaults to ‘ascii’ on Python 2 and ‘utf-8’ on Python 3.
compression : string, optional
a string representing the compression to use in the output file, allowed values are ‘gzip’, ‘bz2’, ‘xz’, only used when the first argument is a filename
line_terminator : string, default '\n'
The newline character or character sequence to use in the output file
quoting : optional constant from csv module
defaults to csv.QUOTE_MINIMAL
quotechar : string (length 1), default ‘”’
character used to quote fields
doublequote : boolean, default True
Control quoting of quotechar inside a field
escapechar : string (length 1), default None
character used to escape sep and quotechar when appropriate
chunksize : int or None
rows to write at a time
tupleize_cols : boolean, default False
write MultiIndex columns as a list of tuples (if True) or in the new, expanded format (if False)
date_format : string, default None
Format string for datetime objects
decimal: string, default ‘.’
Character recognized as decimal separator. E.g. use ‘,’ for European data
storage_options: dict
Parameters passed on to the backend filesystem class.
Returns: The names of the files written, if they were computed right away.
If not, the delayed tasks associated with writing the files.
Raises: ValueError
If header_first_partition_only is set to False or name_function is specified when single_file is True.
-
dask.dataframe.
to_bag
(df, index=False)¶ Create Dask Bag from a Dask DataFrame
Parameters: index : bool, optional
If True, the elements are tuples of
(index, value)
, otherwise they’re just thevalue
. Default is False.Examples
>>> bag = df.to_bag() # doctest: +SKIP
-
dask.dataframe.
to_hdf
(df, path, key, mode='a', append=False, scheduler=None, name_function=None, compute=True, lock=None, dask_kwargs={}, **kwargs)¶ Store Dask Dataframe to Hierarchical Data Format (HDF) files
This is a parallel version of the Pandas function of the same name. Please see the Pandas docstring for more detailed information about shared keyword arguments.
This function differs from the Pandas version by saving the many partitions of a Dask DataFrame in parallel, either to many files, or to many datasets within the same file. You may specify this parallelism with an asterisk * within the filename or datapath, and an optional name_function. The asterisk will be replaced with an increasing sequence of integers starting from 0, or with the result of calling name_function on each of those integers.
This function only supports the Pandas 'table' format, not the more specialized 'fixed' format.
Parameters: path : string, pathlib.Path
Path to a target filename. Supports strings,
pathlib.Path
, or any object implementing the__fspath__
protocol. May contain a*
to denote many filenames.key : string
Datapath within the files. May contain a
*
to denote many locationsname_function : function
A function to convert the
*
in the above options to a string. Should take in a number from 0 to the number of partitions and return a string. (see examples below)compute : bool
Whether or not to execute immediately. If False then this returns a
dask.Delayed
value.lock : Lock, optional
Lock to use to prevent concurrency issues. By default a
threading.Lock
,multiprocessing.Lock
orSerializableLock
will be used depending on your scheduler if a lock is required. See dask.utils.get_scheduler_lock for more information about lock selection.scheduler : string
The scheduler to use, like “threads” or “processes”
**other:
See pandas.to_hdf for more information
Returns: filenames : list
Returned if
compute
is True. List of file names that each partition is saved to.delayed : dask.Delayed
Returned if
compute
is False. Delayed object to executeto_hdf
when computed.See also
Examples
Save Data to a single file
>>> df.to_hdf('output.hdf', '/data') # doctest: +SKIP
Save data to multiple datapaths within the same file:
>>> df.to_hdf('output.hdf', '/data-*') # doctest: +SKIP
Save data to multiple files:
>>> df.to_hdf('output-*.hdf', '/data') # doctest: +SKIP
Save data to multiple files, using the multiprocessing scheduler:
>>> df.to_hdf('output-*.hdf', '/data', scheduler='processes') # doctest: +SKIP
Specify a custom naming scheme. This writes files as '2000-01-01.hdf', '2000-01-02.hdf', '2000-01-03.hdf', etc.:
>>> from datetime import date, timedelta >>> base = date(year=2000, month=1, day=1) >>> def name_function(i): ... ''' Convert integer 0 to n to a string ''' ... return str(base + timedelta(days=i))
>>> df.to_hdf('*.hdf', '/data', name_function=name_function) # doctest: +SKIP
-
dask.dataframe.
to_parquet
(df, path, engine='auto', compression='default', write_index=True, append=False, ignore_divisions=False, partition_on=None, storage_options=None, write_metadata_file=True, compute=True, **kwargs)¶ Store Dask.dataframe to Parquet files
Parameters: df : dask.dataframe.DataFrame
path : string or pathlib.Path
Destination directory for data. Prepend with protocol like
s3://
orhdfs://
for remote data.engine : {‘auto’, ‘fastparquet’, ‘pyarrow’}, default ‘auto’
Parquet library to use. If only one library is installed, it will use that one; if both, it will use ‘fastparquet’.
compression : string or dict, optional
Either a string like
"snappy"
or a dictionary mapping column names to compressors like{"name": "gzip", "values": "snappy"}
. The default is"default"
, which uses the default compression for whichever engine is selected.write_index : boolean, optional
Whether or not to write the index. Defaults to True.
append : bool, optional
If False (default), construct data-set from scratch. If True, add new row-group(s) to an existing data-set. In the latter case, the data-set must exist, and the schema must match the input data.
ignore_divisions : bool, optional
If False (default) raises error when previous divisions overlap with the new appended divisions. Ignored if append=False.
partition_on : list, optional
Construct directory-based partitioning by splitting on these fields’ values. Each dask partition will result in one or more datafiles, there will be no global groupby.
storage_options : dict, optional
Key/value pairs to be passed on to the file-system backend, if any.
write_metadata_file : bool, optional
Whether to write the special “_metadata” file.
compute : bool, optional
If True (default) then the result is computed immediately. If False then a
dask.delayed
object is returned for future computation.**kwargs :
Extra options to be passed on to the specific backend.
See also
read_parquet
- Read parquet data to dask.dataframe
Notes
Each partition will be written to a separate file.
Examples
>>> df = dd.read_csv(...) # doctest: +SKIP >>> dd.to_parquet(df, '/path/to/output/',...) # doctest: +SKIP
-
dask.dataframe.
to_json
(df, url_path, orient='records', lines=None, storage_options=None, compute=True, encoding='utf-8', errors='strict', compression=None, **kwargs)¶ Write dataframe into JSON text files
This utilises
pandas.DataFrame.to_json()
, and most parameters are passed through - see its docstring.Differences: orient is ‘records’ by default, with lines=True; this produces the kind of JSON output that is most common in big-data applications, and which can be chunked when reading (see
read_json()
).Parameters: df: dask.DataFrame
Data to save
url_path: str, list of str
Location to write to. If a string, and there are more than one partitions in df, should include a glob character to expand into a set of file names, or provide a
name_function=
parameter. Supports protocol specifications such as"s3://"
.encoding, errors:
The text encoding to implement, e.g., “utf-8” and how to respond to errors in the conversion (see
str.encode()
).orient, lines, kwargs
passed to pandas; if not specified, lines=True when orient=’records’, False otherwise.
storage_options: dict
Passed to backend file-system implementation
compute: bool
If true, immediately executes. If False, returns a set of delayed objects, which can be computed at a later time.
encoding, errors:
Text conversion,
see str.encode()
compression : string or None
String like ‘gzip’ or ‘xz’.
Rolling¶
-
dask.dataframe.rolling.
map_overlap
(func, df, before, after, *args, **kwargs)¶ Apply a function to each partition, sharing rows with adjacent partitions.
Parameters: func : function
Function applied to each partition.
df : dd.DataFrame, dd.Series
before : int or timedelta
The rows to prepend to partition
i
from the end of partitioni - 1
.after : int or timedelta
The rows to append to partition
i
from the beginning of partitioni + 1
.args, kwargs :
Arguments and keywords to pass to the function. The partition will be the first argument, and these will be passed after.
See also
dd.DataFrame.map_overlap
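A sketch of how the row sharing works (not from the original docstring; the data and partitioning are made up), using a shifted difference that needs one row from the previous partition:
>>> import pandas as pd
>>> import dask.dataframe as dd
>>> from dask.dataframe.rolling import map_overlap
>>> pdf = pd.DataFrame({'x': range(8)})
>>> ddf = dd.from_pandas(pdf, npartitions=4)
>>> # each partition sees one extra row from the partition before it, so
>>> # x.diff() is correct at partition boundaries; no rows are needed from after
>>> map_overlap(lambda df: df.diff(), ddf, 1, 0).compute()  # doctest: +SKIP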
Resampling¶
-
class
dask.dataframe.tseries.resample.
Resampler
(obj, rule, **kwargs)¶ Class for resampling timeseries data.
This class is commonly encountered when using obj.resample(...), which returns Resampler objects.
Parameters: obj : Dask DataFrame or Series
Data to be resampled.
rule : str, tuple, datetime.timedelta, DateOffset or None
The offset string or object representing the target conversion.
kwargs : optional
Keyword arguments passed to underlying pandas resampling function.
Returns: Resampler instance of the appropriate type
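A minimal sketch of obtaining a Resampler and aggregating it (not part of the original docstring; the index and data are made up):
>>> import pandas as pd
>>> import dask.dataframe as dd
>>> idx = pd.date_range('2000-01-01', periods=6, freq='H')
>>> pdf = pd.DataFrame({'x': range(6)}, index=idx)
>>> ddf = dd.from_pandas(pdf, npartitions=2)
>>> # ddf.resample('2H') returns a Resampler; mean() reduces each 2-hour bin
>>> ddf.resample('2H').mean().compute()  # doctest: +SKIP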
-
agg
(agg_funcs, *args, **kwargs)¶ Aggregate using one or more operations over the specified axis.
This docstring was copied from pandas.core.resample.Resampler.agg.
Some inconsistencies with the Dask version may exist.
Parameters: func : function, str, list or dict (Not supported in Dask)
Function to use for aggregating the data. If a function, must either work when passed a DataFrame or when passed to DataFrame.apply.
Accepted combinations are:
- function
- string function name
- list of functions and/or function names, e.g.
[np.sum, 'mean']
- dict of axis labels -> functions, function names or list of such.
*args
Positional arguments to pass to func.
**kwargs
Keyword arguments to pass to func.
Returns: scalar, Series or DataFrame
The return can be:
- scalar : when Series.agg is called with single function
- Series : when DataFrame.agg is called with a single function
- DataFrame : when DataFrame.agg is called with several functions
Return scalar, Series or DataFrame.
See also
DataFrame.groupby.aggregate
,DataFrame.resample.transform
,DataFrame.aggregate
Notes
agg is an alias for aggregate. Use the alias.
A passed user-defined-function will be passed a Series for evaluation.
Examples
>>> s = pd.Series([1, 2, 3, 4, 5],  # doctest: +SKIP
...               index=pd.date_range('20130101', periods=5, freq='s'))
2013-01-01 00:00:00    1
2013-01-01 00:00:01    2
2013-01-01 00:00:02    3
2013-01-01 00:00:03    4
2013-01-01 00:00:04    5
Freq: S, dtype: int64
>>> r = s.resample('2s')  # doctest: +SKIP
DatetimeIndexResampler [freq=<2 * Seconds>, axis=0, closed=left,
                        label=left, convention=start, base=0]
>>> r.agg(np.sum)  # doctest: +SKIP
2013-01-01 00:00:00    3
2013-01-01 00:00:02    7
2013-01-01 00:00:04    5
Freq: 2S, dtype: int64
>>> r.agg(['sum', 'mean', 'max'])  # doctest: +SKIP
                     sum  mean  max
2013-01-01 00:00:00    3   1.5    2
2013-01-01 00:00:02    7   3.5    4
2013-01-01 00:00:04    5   5.0    5
>>> r.agg({'result': lambda x: x.mean() / x.std(),  # doctest: +SKIP
...        'total': np.sum})
                       total    result
2013-01-01 00:00:00        3  2.121320
2013-01-01 00:00:02        7  4.949747
2013-01-01 00:00:04        5       NaN
-
count
()¶ Compute count of group, excluding missing values.
This docstring was copied from pandas.core.resample.Resampler.count.
Some inconsistencies with the Dask version may exist.
Returns: Series or DataFrame
Count of values within each group.
See also
Series.groupby
,DataFrame.groupby
-
first
()¶ Compute first of group values.
This docstring was copied from pandas.core.resample.Resampler.first.
Some inconsistencies with the Dask version may exist.
Returns: Series or DataFrame
Computed first of values within each group.
-
last
()¶ Compute last of group values.
This docstring was copied from pandas.core.resample.Resampler.last.
Some inconsistencies with the Dask version may exist.
Returns: Series or DataFrame
Computed last of values within each group.
-
max
()¶ Compute max of group values.
This docstring was copied from pandas.core.resample.Resampler.max.
Some inconsistencies with the Dask version may exist.
Returns: Series or DataFrame
Computed max of values within each group.
-
mean
()¶ Compute mean of groups, excluding missing values.
This docstring was copied from pandas.core.resample.Resampler.mean.
Some inconsistencies with the Dask version may exist.
Returns: pandas.Series or pandas.DataFrame
See also
Series.groupby
,DataFrame.groupby
Examples
>>> df = pd.DataFrame({'A': [1, 1, 2, 1, 2],  # doctest: +SKIP
...                    'B': [np.nan, 2, 3, 4, 5],
...                    'C': [1, 2, 1, 1, 2]}, columns=['A', 'B', 'C'])
Groupby one column and return the mean of the remaining columns in each group.
>>> df.groupby('A').mean()  # doctest: +SKIP
     B         C
A
1  3.0  1.333333
2  4.0  1.500000
Groupby two columns and return the mean of the remaining column.
>>> df.groupby(['A', 'B']).mean()  # doctest: +SKIP
       C
A B
1 2.0  2
  4.0  1
2 3.0  1
  5.0  2
Groupby one column and return the mean of only a particular column in the group.
>>> df.groupby('A')['B'].mean()  # doctest: +SKIP
A
1    3.0
2    4.0
Name: B, dtype: float64
-
median
()¶ Compute median of groups, excluding missing values.
This docstring was copied from pandas.core.resample.Resampler.median.
Some inconsistencies with the Dask version may exist.
For multiple groupings, the result index will be a MultiIndex
Returns: Series or DataFrame
Median of values within each group.
See also
Series.groupby
,DataFrame.groupby
-
min
()¶ Compute min of group values.
This docstring was copied from pandas.core.resample.Resampler.min.
Some inconsistencies with the Dask version may exist.
Returns: Series or DataFrame
Computed min of values within each group.
-
nunique
()¶ Return number of unique elements in the group.
This docstring was copied from pandas.core.resample.Resampler.nunique.
Some inconsistencies with the Dask version may exist.
Returns: Series
Number of unique values within each group.
-
ohlc
()¶ Compute open, high, low and close values of the group, excluding missing values.
This docstring was copied from pandas.core.resample.Resampler.ohlc.
Some inconsistencies with the Dask version may exist.
For multiple groupings, the result index will be a MultiIndex
Returns: DataFrame
Open, high, low and close values within each group.
See also
Series.groupby
,DataFrame.groupby
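A brief sketch of the shape of the result (the prices below are invented for illustration): each resample bin yields one row with open, high, low and close columns.

import pandas as pd
import dask.dataframe as dd

index = pd.date_range('2000-01-01', periods=8, freq='H')
prices = pd.Series([10, 11, 9, 12, 20, 18, 22, 21], index=index)
dprices = dd.from_pandas(prices, npartitions=2)

# One open/high/low/close row per 4-hour bin
dprices.resample('4H').ohlc().compute()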
-
prod
()¶ Compute prod of group values.
This docstring was copied from pandas.core.resample.Resampler.prod.
Some inconsistencies with the Dask version may exist.
Returns: Series or DataFrame
Computed prod of values within each group.
-
quantile
()¶ Return value at the given quantile.
This docstring was copied from pandas.core.resample.Resampler.quantile.
Some inconsistencies with the Dask version may exist.
New in version 0.24.0.
Parameters: q : float or array-like, default 0.5 (50% quantile) (Not supported in Dask)
Returns: DataFrame or Series
Quantile of values within each group.
See also
Series.quantile
,DataFrame.quantile
,DataFrameGroupBy.quantile
-
sem
()¶ Compute standard error of the mean of groups, excluding missing values.
This docstring was copied from pandas.core.resample.Resampler.sem.
Some inconsistencies with the Dask version may exist.
For multiple groupings, the result index will be a MultiIndex.
Parameters: ddof : integer, default 1
degrees of freedom
Returns: Series or DataFrame
Standard error of the mean of values within each group.
See also
Series.groupby
,DataFrame.groupby
-
size
()¶ Compute group sizes.
This docstring was copied from pandas.core.resample.Resampler.size.
Some inconsistencies with the Dask version may exist.
Returns: Series
Number of rows in each group.
See also
Series.groupby
,DataFrame.groupby
-
std
()¶ Compute standard deviation of groups, excluding missing values.
This docstring was copied from pandas.core.resample.Resampler.std.
Some inconsistencies with the Dask version may exist.
Parameters: ddof : integer, default 1 (Not supported in Dask)
Degrees of freedom.
Returns: DataFrame or Series
Standard deviation of values within each group.
-
sum
()¶ Compute sum of group values.
This docstring was copied from pandas.core.resample.Resampler.sum.
Some inconsistencies with the Dask version may exist.
Returns: Series or DataFrame
Computed sum of values within each group.
-
var
()¶ Compute variance of groups, excluding missing values.
This docstring was copied from pandas.core.resample.Resampler.var.
Some inconsistencies with the Dask version may exist.
Parameters: ddof : integer, default 1 (Not supported in Dask)
degrees of freedom
Returns: DataFrame or Series
Variance of values within each group.
-
Dask Metadata¶
-
dask.dataframe.utils.
make_meta
(arg, *args, **kwargs)¶ Create an empty pandas object containing the desired metadata.
Parameters: x : dict, tuple, list, pd.Series, pd.DataFrame, pd.Index, dtype, scalar
To create a DataFrame, provide a dict mapping of {name: dtype}, or an iterable of (name, dtype) tuples. To create a Series, provide a tuple of (name, dtype). If a pandas object, names, dtypes, and index should match the desired output. If a dtype or scalar, a scalar of the same dtype is returned.
index : pd.Index, optional
Any pandas index to use in the metadata. If none provided, a RangeIndex will be used.
Examples
>>> make_meta([('a', 'i8'), ('b', 'O')])
Empty DataFrame
Columns: [a, b]
Index: []
>>> make_meta(('a', 'f8'))
Series([], Name: a, dtype: float64)
>>> make_meta('i8')
1
Other functions¶
-
dask.dataframe.
compute
(*args, **kwargs)¶ Compute several dask collections at once.
Parameters: args : object
Any number of objects. If it is a dask object, it’s computed and the result is returned. By default, python builtin collections are also traversed to look for dask objects (for more information see the traverse keyword). Non-dask arguments are passed through unchanged.
traverse : bool, optional
By default dask traverses builtin python collections looking for dask objects passed to compute. For large collections this can be expensive. If none of the arguments contain any dask objects, set traverse=False to avoid doing this traversal.
scheduler : string, optional
Which scheduler to use like “threads”, “synchronous” or “processes”. If not provided, the default is to check the global settings first, and then fall back to the collection defaults.
optimize_graph : bool, optional
If True [default], the optimizations for each collection are applied before computation. Otherwise the graph is run as is. This can be useful for debugging.
kwargs
Extra keywords to forward to the scheduler function.
Examples
>>> import dask.array as da
>>> a = da.arange(10, chunks=2).sum()
>>> b = da.arange(10, chunks=2).mean()
>>> compute(a, b)
(45, 4.5)
By default, dask objects inside python collections will also be computed:
>>> compute({'a': a, 'b': b, 'c': 1})  # doctest: +SKIP
({'a': 45, 'b': 4.5, 'c': 1},)
-
dask.dataframe.
map_partitions
(func, *args, meta='__no_default__', enforce_metadata=True, transform_divisions=True, **kwargs)¶ Apply Python function on each DataFrame partition.
Parameters: func : function
Function applied to each partition.
args, kwargs :
Arguments and keywords to pass to the function. At least one of the args should be a Dask.dataframe. Arguments and keywords may contain Scalar, Delayed or regular python objects. DataFrame-like args (both dask and pandas) will be repartitioned to align (if necessary) before applying the function.
enforce_metadata : bool
Whether or not to enforce the structure of the metadata at runtime. This will rename and reorder columns for each partition, and will raise an error if this doesn’t work or types don’t match.
meta : pd.DataFrame, pd.Series, dict, iterable, tuple, optional
An empty pd.DataFrame or pd.Series that matches the dtypes and column names of the output. This metadata is necessary for many algorithms in dask dataframe to work. For ease of use, some alternative inputs are also available. Instead of a DataFrame, a dict of {name: dtype} or iterable of (name, dtype) can be provided (note that the order of the names should match the order of the columns). Instead of a series, a tuple of (name, dtype) can be used. If not provided, dask will try to infer the metadata. This may lead to unexpected results, so providing meta is recommended. For more information, see dask.dataframe.utils.make_meta.
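A minimal sketch of the module-level function with an explicit meta; the frame, the helper and the column names are invented for illustration:

import pandas as pd
import dask.dataframe as dd

ddf = dd.from_pandas(pd.DataFrame({'x': range(6), 'y': range(6, 12)}),
                     npartitions=2)

def add_total(df):
    # Receives one pandas partition at a time
    return df.assign(total=df.x + df.y)

# meta as {name: dtype} spells out the output columns so dask need not guess
result = dd.map_partitions(add_total, ddf, meta={'x': 'i8', 'y': 'i8', 'total': 'i8'})
result.compute()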
-
dask.dataframe.
to_datetime
(arg, errors='raise', dayfirst=False, yearfirst=False, utc=None, box=True, format=None, exact=True, unit=None, infer_datetime_format=False, origin='unix', cache=True)¶ Convert argument to datetime.
Parameters: arg : integer, float, string, datetime, list, tuple, 1-d array, Series
New in version 0.18.1: or DataFrame/dict-like
errors : {‘ignore’, ‘raise’, ‘coerce’}, default ‘raise’
- If ‘raise’, then invalid parsing will raise an exception
- If ‘coerce’, then invalid parsing will be set as NaT
- If ‘ignore’, then invalid parsing will return the input
dayfirst : boolean, default False
Specify a date parse order if arg is str or its list-likes. If True, parses dates with the day first, eg 10/11/12 is parsed as 2012-11-10. Warning: dayfirst=True is not strict, but will prefer to parse with day first (this is a known bug, based on dateutil behavior).
yearfirst : boolean, default False
Specify a date parse order if arg is str or its list-likes.
- If True parses dates with the year first, eg 10/11/12 is parsed as 2010-11-12.
- If both dayfirst and yearfirst are True, yearfirst takes precedence (same as dateutil).
Warning: yearfirst=True is not strict, but will prefer to parse with year first (this is a known bug, based on dateutil behavior).
New in version 0.16.1.
utc : boolean, default None
Return UTC DatetimeIndex if True (converting any tz-aware datetime.datetime objects as well).
box : boolean, default True
- If True returns a DatetimeIndex or Index-like object
- If False returns ndarray of values.
Deprecated since version 0.25.0: Use Series.to_numpy() or Timestamp.to_datetime64() instead to get an ndarray of values or numpy.datetime64, respectively.
format : string, default None
strftime to parse time, eg “%d/%m/%Y”, note that “%f” will parse all the way up to nanoseconds. See strftime documentation for more information on choices: https://docs.python.org/3/library/datetime.html#strftime-and-strptime-behavior
exact : boolean, True by default
- If True, require an exact format match.
- If False, allow the format to match anywhere in the target string.
unit : string, default ‘ns’
The unit of the arg (D, s, ms, us, ns), which is an integer or float number. This is based on the origin. For example, with unit=’ms’ and origin=’unix’ (the default), this calculates the number of milliseconds to the unix epoch start.
infer_datetime_format : boolean, default False
If True and no format is given, attempt to infer the format of the datetime strings, and if it can be inferred, switch to a faster method of parsing them. In some cases this can increase the parsing speed by ~5-10x.
origin : scalar, default is ‘unix’
Define the reference date. The numeric values would be parsed as number of units (defined by unit) since this reference date.
- If ‘unix’ (or POSIX) time; origin is set to 1970-01-01.
- If ‘julian’, unit must be ‘D’, and origin is set to beginning of Julian Calendar. Julian day number 0 is assigned to the day starting at noon on January 1, 4713 BC.
- If Timestamp convertible, origin is set to Timestamp identified by origin.
New in version 0.20.0.
cache : boolean, default True
If True, use a cache of unique, converted dates to apply the datetime conversion. May produce significant speed-up when parsing duplicate date strings, especially ones with timezone offsets.
New in version 0.23.0.
Changed in version 0.25.0: - changed default value from False to True
Returns: ret : datetime if parsing succeeded.
Return type depends on input:
- list-like: DatetimeIndex
- Series: Series of datetime64 dtype
- scalar: Timestamp
When it is not possible to return the designated types (e.g. when any element of the input is before Timestamp.min or after Timestamp.max), the return will have datetime.datetime type (or the corresponding array/Series).
See also
DataFrame.astype
- Cast argument to a specified dtype.
to_timedelta
- Convert argument to timedelta.
Examples
Assembling a datetime from multiple columns of a DataFrame. The keys can be common abbreviations like [‘year’, ‘month’, ‘day’, ‘minute’, ‘second’, ‘ms’, ‘us’, ‘ns’] or plurals of the same.
>>> df = pd.DataFrame({'year': [2015, 2016],
...                    'month': [2, 3],
...                    'day': [4, 5]})
>>> pd.to_datetime(df)
0   2015-02-04
1   2016-03-05
dtype: datetime64[ns]
If a date does not meet the timestamp limitations, passing errors=’ignore’ will return the original input instead of raising any exception.
Passing errors=’coerce’ will force an out-of-bounds date to NaT, in addition to forcing non-dates (or non-parseable dates) to NaT.
>>> pd.to_datetime('13000101', format='%Y%m%d', errors='ignore')
datetime.datetime(1300, 1, 1, 0, 0)
>>> pd.to_datetime('13000101', format='%Y%m%d', errors='coerce')
NaT
Passing infer_datetime_format=True can often speed up parsing if the strings are not exactly in ISO 8601 format but follow a regular pattern.
>>> s = pd.Series(['3/11/2000', '3/12/2000', '3/13/2000'] * 1000)
>>> s.head()
0    3/11/2000
1    3/12/2000
2    3/13/2000
3    3/11/2000
4    3/12/2000
dtype: object
>>> %timeit pd.to_datetime(s, infer_datetime_format=True)  # doctest: +SKIP
100 loops, best of 3: 10.4 ms per loop
>>> %timeit pd.to_datetime(s, infer_datetime_format=False)  # doctest: +SKIP
1 loop, best of 3: 471 ms per loop
Using a unix epoch time
>>> pd.to_datetime(1490195805, unit='s')
Timestamp('2017-03-22 15:16:45')
>>> pd.to_datetime(1490195805433502912, unit='ns')
Timestamp('2017-03-22 15:16:45.433502912')
Warning
For float arg, precision rounding might happen. To prevent unexpected behavior use a fixed-width exact type.
Using a non-unix epoch origin
>>> pd.to_datetime([1, 2, 3], unit='D',
...                origin=pd.Timestamp('1960-01-01'))
DatetimeIndex(['1960-01-02', '1960-01-03', '1960-01-04'], dtype='datetime64[ns]', freq=None)
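As a small sketch of the dask variant (the column name is invented for illustration), the same conversion is applied lazily, partition by partition:

import pandas as pd
import dask.dataframe as dd

ddf = dd.from_pandas(
    pd.DataFrame({'when': ['2000-01-01', '2000-02-15', '2000-03-31']}),
    npartitions=2)

# Lazy datetime64[ns] Series; nothing is parsed until .compute()
dd.to_datetime(ddf['when'], format='%Y-%m-%d').compute()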
-
dask.dataframe.multi.
concat
(dfs, axis=0, join='outer', interleave_partitions=False)¶ Concatenate DataFrames along rows.
- When axis=0 (default), concatenate DataFrames row-wise:
- If all divisions are known and ordered, concatenate DataFrames keeping divisions. When divisions are not ordered, specifying interleave_partitions=True allows the divisions to be concatenated interleaved with one another.
- If any division is unknown, concatenate DataFrames resetting their divisions to unknown (None)
- When axis=1, concatenate DataFrames column-wise:
- Allowed if all divisions are known.
- If any division is unknown, a ValueError is raised.
Parameters: dfs : list
List of dask.DataFrames to be concatenated
axis : {0, 1, ‘index’, ‘columns’}, default 0
The axis to concatenate along
join : {‘inner’, ‘outer’}, default ‘outer’
How to handle indexes on other axis
interleave_partitions : bool, default False
Whether to concatenate DataFrames ignoring their order. If True, divisions are concatenated interleaved with one another.
Notes
This differs from pd.concat when concatenating Categoricals with different categories. Pandas currently coerces those to objects before concatenating. Coercing to objects is very expensive for large arrays, so dask preserves the Categoricals by taking the union of the categories.
Examples
If all divisions are known and ordered, divisions are kept.
>>> a                      # doctest: +SKIP
dd.DataFrame<x, divisions=(1, 3, 5)>
>>> b                      # doctest: +SKIP
dd.DataFrame<y, divisions=(6, 8, 10)>
>>> dd.concat([a, b])      # doctest: +SKIP
dd.DataFrame<concat-..., divisions=(1, 3, 6, 8, 10)>
Unable to concatenate if divisions are not ordered.
>>> a                      # doctest: +SKIP
dd.DataFrame<x, divisions=(1, 3, 5)>
>>> b                      # doctest: +SKIP
dd.DataFrame<y, divisions=(2, 3, 6)>
>>> dd.concat([a, b])      # doctest: +SKIP
ValueError: All inputs have known divisions which cannot be concatenated
in order. Specify interleave_partitions=True to ignore order
Specify interleave_partitions=True to ignore the division order.
>>> dd.concat([a, b], interleave_partitions=True)  # doctest: +SKIP
dd.DataFrame<concat-..., divisions=(1, 2, 3, 5, 6)>
If any division is unknown, the resulting divisions will be unknown
>>> a                      # doctest: +SKIP
dd.DataFrame<x, divisions=(None, None)>
>>> b                      # doctest: +SKIP
dd.DataFrame<y, divisions=(1, 4, 10)>
>>> dd.concat([a, b])      # doctest: +SKIP
dd.DataFrame<concat-..., divisions=(None, None, None, None)>
Different categoricals are unioned
>>> dd.concat([                                     # doctest: +SKIP
...     dd.from_pandas(pd.Series(['a', 'b'], dtype='category'), 1),
...     dd.from_pandas(pd.Series(['a', 'c'], dtype='category'), 1),
... ], interleave_partitions=True).dtype
CategoricalDtype(categories=['a', 'b', 'c'], ordered=False)
-
dask.dataframe.multi.
merge
(left, right, how='inner', on=None, left_on=None, right_on=None, left_index=False, right_index=False, sort=False, suffixes=('_x', '_y'), copy=True, indicator=False, validate=None)¶ Merge DataFrame or named Series objects with a database-style join.
The join is done on columns or indexes. If joining columns on columns, the DataFrame indexes will be ignored. Otherwise if joining indexes on indexes or indexes on a column or columns, the index will be passed on.
Parameters: left : DataFrame
right : DataFrame or named Series
Object to merge with.
how : {‘left’, ‘right’, ‘outer’, ‘inner’}, default ‘inner’
Type of merge to be performed.
- left: use only keys from left frame, similar to a SQL left outer join; preserve key order.
- right: use only keys from right frame, similar to a SQL right outer join; preserve key order.
- outer: use union of keys from both frames, similar to a SQL full outer join; sort keys lexicographically.
- inner: use intersection of keys from both frames, similar to a SQL inner join; preserve the order of the left keys.
on : label or list
Column or index level names to join on. These must be found in both DataFrames. If on is None and not merging on indexes then this defaults to the intersection of the columns in both DataFrames.
left_on : label or list, or array-like
Column or index level names to join on in the left DataFrame. Can also be an array or list of arrays of the length of the left DataFrame. These arrays are treated as if they are columns.
right_on : label or list, or array-like
Column or index level names to join on in the right DataFrame. Can also be an array or list of arrays of the length of the right DataFrame. These arrays are treated as if they are columns.
left_index : bool, default False
Use the index from the left DataFrame as the join key(s). If it is a MultiIndex, the number of keys in the other DataFrame (either the index or a number of columns) must match the number of levels.
right_index : bool, default False
Use the index from the right DataFrame as the join key. Same caveats as left_index.
sort : bool, default False
Sort the join keys lexicographically in the result DataFrame. If False, the order of the join keys depends on the join type (how keyword).
suffixes : tuple of (str, str), default (‘_x’, ‘_y’)
Suffix to apply to overlapping column names in the left and right side, respectively. To raise an exception on overlapping columns use (False, False).
copy : bool, default True
If False, avoid copy if possible.
indicator : bool or str, default False
If True, adds a column to output DataFrame called “_merge” with information on the source of each row. If string, column with information on source of each row will be added to output DataFrame, and column will be named value of string. Information column is Categorical-type and takes on a value of “left_only” for observations whose merge key only appears in ‘left’ DataFrame, “right_only” for observations whose merge key only appears in ‘right’ DataFrame, and “both” if the observation’s merge key is found in both.
validate : str, optional
If specified, checks if merge is of specified type.
- “one_to_one” or “1:1”: check if merge keys are unique in both left and right datasets.
- “one_to_many” or “1:m”: check if merge keys are unique in left dataset.
- “many_to_one” or “m:1”: check if merge keys are unique in right dataset.
- “many_to_many” or “m:m”: allowed, but does not result in checks.
New in version 0.21.0.
Returns: DataFrame
A DataFrame of the two merged objects.
See also
merge_ordered
- Merge with optional filling/interpolation.
merge_asof
- Merge on nearest keys.
DataFrame.join
- Similar method using indices.
Notes
Support for specifying index levels as the on, left_on, and right_on parameters was added in version 0.23.0. Support for merging named Series objects was added in version 0.24.0.
Examples
>>> df1 = pd.DataFrame({'lkey': ['foo', 'bar', 'baz', 'foo'],
...                     'value': [1, 2, 3, 5]})
>>> df2 = pd.DataFrame({'rkey': ['foo', 'bar', 'baz', 'foo'],
...                     'value': [5, 6, 7, 8]})
>>> df1
  lkey  value
0  foo      1
1  bar      2
2  baz      3
3  foo      5
>>> df2
  rkey  value
0  foo      5
1  bar      6
2  baz      7
3  foo      8
Merge df1 and df2 on the lkey and rkey columns. The value columns have the default suffixes, _x and _y, appended.
>>> df1.merge(df2, left_on='lkey', right_on='rkey')
  lkey  value_x rkey  value_y
0  foo        1  foo        5
1  foo        1  foo        8
2  foo        5  foo        5
3  foo        5  foo        8
4  bar        2  bar        6
5  baz        3  baz        7
Merge DataFrames df1 and df2 with specified left and right suffixes appended to any overlapping columns.
>>> df1.merge(df2, left_on='lkey', right_on='rkey',
...           suffixes=('_left', '_right'))
  lkey  value_left rkey  value_right
0  foo           1  foo            5
1  foo           1  foo            8
2  foo           5  foo            5
3  foo           5  foo            8
4  bar           2  bar            6
5  baz           3  baz            7
Merge DataFrames df1 and df2, but raise an exception if the DataFrames have any overlapping columns.
>>> df1.merge(df2, left_on='lkey', right_on='rkey', suffixes=(False, False))
Traceback (most recent call last):
...
ValueError: columns overlap but no suffix specified:
    Index(['value'], dtype='object')
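A hedged sketch of the same operation on dask DataFrames (the frames are invented for illustration); joining on a column generally requires shuffling data between partitions, while joining along a sorted index is cheaper:

import pandas as pd
import dask.dataframe as dd

left = dd.from_pandas(
    pd.DataFrame({'key': ['a', 'b', 'c', 'a'], 'x': [1, 2, 3, 4]}),
    npartitions=2)
right = dd.from_pandas(
    pd.DataFrame({'key': ['a', 'b'], 'y': [10, 20]}),
    npartitions=1)

dd.merge(left, right, on='key', how='inner').compute()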
-
dask.dataframe.multi.
merge_asof
(left, right, on=None, left_on=None, right_on=None, left_index=False, right_index=False, by=None, left_by=None, right_by=None, suffixes=('_x', '_y'), tolerance=None, allow_exact_matches=True, direction='backward')¶ Perform an asof merge. This is similar to a left-join except that we match on nearest key rather than equal keys.
Both DataFrames must be sorted by the key.
For each row in the left DataFrame:
- A “backward” search selects the last row in the right DataFrame whose ‘on’ key is less than or equal to the left’s key.
- A “forward” search selects the first row in the right DataFrame whose ‘on’ key is greater than or equal to the left’s key.
- A “nearest” search selects the row in the right DataFrame whose ‘on’ key is closest in absolute distance to the left’s key.
The default is “backward” and is compatible in versions below 0.20.0. The direction parameter was added in version 0.20.0 and introduces “forward” and “nearest”.
Optionally match on equivalent keys with ‘by’ before searching with ‘on’.
New in version 0.19.0.
Parameters: left : DataFrame
right : DataFrame
on : label
Field name to join on. Must be found in both DataFrames. The data MUST be ordered. Furthermore this must be a numeric column, such as datetimelike, integer, or float. On or left_on/right_on must be given.
left_on : label
Field name to join on in left DataFrame.
right_on : label
Field name to join on in right DataFrame.
left_index : boolean
Use the index of the left DataFrame as the join key.
New in version 0.19.2.
right_index : boolean
Use the index of the right DataFrame as the join key.
New in version 0.19.2.
by : column name or list of column names
Match on these columns before performing merge operation.
left_by : column name
Field names to match on in the left DataFrame.
New in version 0.19.2.
right_by : column name
Field names to match on in the right DataFrame.
New in version 0.19.2.
suffixes : 2-length sequence (tuple, list, …)
Suffix to apply to overlapping column names in the left and right side, respectively.
tolerance : integer or Timedelta, optional, default None
Select asof tolerance within this range; must be compatible with the merge index.
allow_exact_matches : boolean, default True
- If True, allow matching with the same ‘on’ value (i.e. less-than-or-equal-to / greater-than-or-equal-to)
- If False, don’t match the same ‘on’ value (i.e., strictly less-than / strictly greater-than)
direction : ‘backward’ (default), ‘forward’, or ‘nearest’
Whether to search for prior, subsequent, or closest matches.
New in version 0.20.0.
Returns: merged : DataFrame
See also
merge
,merge_ordered
Examples
>>> left = pd.DataFrame({'a': [1, 5, 10], 'left_val': ['a', 'b', 'c']})
>>> left
    a left_val
0   1        a
1   5        b
2  10        c
>>> right = pd.DataFrame({'a': [1, 2, 3, 6, 7],
...                       'right_val': [1, 2, 3, 6, 7]})
>>> right
   a  right_val
0  1          1
1  2          2
2  3          3
3  6          6
4  7          7
>>> pd.merge_asof(left, right, on='a')
    a left_val  right_val
0   1        a          1
1   5        b          3
2  10        c          7
>>> pd.merge_asof(left, right, on='a', allow_exact_matches=False)
    a left_val  right_val
0   1        a        NaN
1   5        b        3.0
2  10        c        7.0
>>> pd.merge_asof(left, right, on='a', direction='forward')
    a left_val  right_val
0   1        a        1.0
1   5        b        6.0
2  10        c        NaN
>>> pd.merge_asof(left, right, on='a', direction='nearest')
    a left_val  right_val
0   1        a          1
1   5        b          6
2  10        c          7
We can use indexed DataFrames as well.
>>> left = pd.DataFrame({'left_val': ['a', 'b', 'c']}, index=[1, 5, 10])
>>> left
   left_val
1         a
5         b
10        c
>>> right = pd.DataFrame({'right_val': [1, 2, 3, 6, 7]},
...                      index=[1, 2, 3, 6, 7])
>>> right
   right_val
1          1
2          2
3          3
6          6
7          7
>>> pd.merge_asof(left, right, left_index=True, right_index=True)
   left_val  right_val
1         a          1
5         b          3
10        c          7
Here is a real-world time-series example
>>> quotes
                     time ticker     bid     ask
0 2016-05-25 13:30:00.023   GOOG  720.50  720.93
1 2016-05-25 13:30:00.023   MSFT   51.95   51.96
2 2016-05-25 13:30:00.030   MSFT   51.97   51.98
3 2016-05-25 13:30:00.041   MSFT   51.99   52.00
4 2016-05-25 13:30:00.048   GOOG  720.50  720.93
5 2016-05-25 13:30:00.049   AAPL   97.99   98.01
6 2016-05-25 13:30:00.072   GOOG  720.50  720.88
7 2016-05-25 13:30:00.075   MSFT   52.01   52.03
>>> trades
                     time ticker   price  quantity
0 2016-05-25 13:30:00.023   MSFT   51.95        75
1 2016-05-25 13:30:00.038   MSFT   51.95       155
2 2016-05-25 13:30:00.048   GOOG  720.77       100
3 2016-05-25 13:30:00.048   GOOG  720.92       100
4 2016-05-25 13:30:00.048   AAPL   98.00       100
By default we are taking the asof of the quotes
>>> pd.merge_asof(trades, quotes,
...               on='time',
...               by='ticker')
                     time ticker   price  quantity     bid     ask
0 2016-05-25 13:30:00.023   MSFT   51.95        75   51.95   51.96
1 2016-05-25 13:30:00.038   MSFT   51.95       155   51.97   51.98
2 2016-05-25 13:30:00.048   GOOG  720.77       100  720.50  720.93
3 2016-05-25 13:30:00.048   GOOG  720.92       100  720.50  720.93
4 2016-05-25 13:30:00.048   AAPL   98.00       100     NaN     NaN
We only asof within 2ms between the quote time and the trade time
>>> pd.merge_asof(trades, quotes,
...               on='time',
...               by='ticker',
...               tolerance=pd.Timedelta('2ms'))
                     time ticker   price  quantity     bid     ask
0 2016-05-25 13:30:00.023   MSFT   51.95        75   51.95   51.96
1 2016-05-25 13:30:00.038   MSFT   51.95       155     NaN     NaN
2 2016-05-25 13:30:00.048   GOOG  720.77       100  720.50  720.93
3 2016-05-25 13:30:00.048   GOOG  720.92       100  720.50  720.93
4 2016-05-25 13:30:00.048   AAPL   98.00       100     NaN     NaN
We only asof within 10ms between the quote time and the trade time and we exclude exact matches on time. However prior data will propagate forward
>>> pd.merge_asof(trades, quotes,
...               on='time',
...               by='ticker',
...               tolerance=pd.Timedelta('10ms'),
...               allow_exact_matches=False)
                     time ticker   price  quantity     bid     ask
0 2016-05-25 13:30:00.023   MSFT   51.95        75     NaN     NaN
1 2016-05-25 13:30:00.038   MSFT   51.95       155   51.97   51.98
2 2016-05-25 13:30:00.048   GOOG  720.77       100     NaN     NaN
3 2016-05-25 13:30:00.048   GOOG  720.92       100     NaN     NaN
4 2016-05-25 13:30:00.048   AAPL   98.00       100     NaN     NaN
-
dask.dataframe.reshape.
get_dummies
(data, prefix=None, prefix_sep='_', dummy_na=False, columns=None, sparse=False, drop_first=False, dtype=<class 'numpy.uint8'>, **kwargs)¶ Convert categorical variable into dummy/indicator variables.
Data must have category dtype to infer result’s columns.
Parameters: data : Series, or DataFrame
For Series, the dtype must be categorical. For DataFrame, at least one column must be categorical.
prefix : string, list of strings, or dict of strings, default None
String to append DataFrame column names. Pass a list with length equal to the number of columns when calling get_dummies on a DataFrame. Alternatively, prefix can be a dictionary mapping column names to prefixes.
prefix_sep : string, default ‘_’
If appending prefix, separator/delimiter to use. Or pass a list or dictionary as with prefix.
dummy_na : bool, default False
Add a column to indicate NaNs, if False NaNs are ignored.
columns : list-like, default None
Column names in the DataFrame to be encoded. If columns is None then all the columns with category dtype will be converted.
sparse : bool, default False
Whether the dummy columns should be sparse or not. Returns SparseDataFrame if data is a Series or if all columns are included. Otherwise returns a DataFrame with some SparseBlocks.
New in version 0.18.2.
drop_first : bool, default False
Whether to get k-1 dummies out of k categorical levels by removing the first level.
dtype : dtype, default np.uint8
Data type for new columns. Only a single dtype is allowed. Only valid if pandas is 0.23.0 or newer.
New in version 0.18.2.
Returns: dummies : DataFrame
See also
Examples
Dask’s version only works with Categorical data, as this is the only way to know the output shape without computing all the data.
>>> import pandas as pd
>>> import dask.dataframe as dd
>>> s = dd.from_pandas(pd.Series(list('abca')), npartitions=2)
>>> dd.get_dummies(s)
Traceback (most recent call last):
...
NotImplementedError: `get_dummies` with non-categorical dtypes is not supported...
With categorical data:
>>> s = dd.from_pandas(pd.Series(list('abca'), dtype='category'), npartitions=2)
>>> dd.get_dummies(s)  # doctest: +NORMALIZE_WHITESPACE
Dask DataFrame Structure:
                   a      b      c
npartitions=2
0              uint8  uint8  uint8
2                ...    ...    ...
3                ...    ...    ...
Dask Name: get_dummies, 4 tasks
>>> dd.get_dummies(s).compute()  # doctest: +ELLIPSIS
   a  b  c
0  1  0  0
1  0  1  0
2  0  0  1
3  1  0  0
-
dask.dataframe.reshape.
pivot_table
(df, index=None, columns=None, values=None, aggfunc='mean')¶ Create a spreadsheet-style pivot table as a DataFrame.
Target columns must have category dtype to infer result’s columns. index, columns, values and aggfunc must all be scalar.
Parameters: df : DataFrame
index : scalar
column to be index
columns : scalar
column to be columns
values : scalar
column to aggregate
aggfunc : {‘mean’, ‘sum’, ‘count’}, default ‘mean’
Returns: table : DataFrame
See also
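A small sketch under the constraint above, with an invented frame whose columns argument ('item') already has category dtype so the output columns are known without computing:

import pandas as pd
import dask.dataframe as dd

pdf = pd.DataFrame({'store': [1, 1, 2, 2],
                    'item': pd.Categorical(['a', 'b', 'a', 'b']),
                    'sales': [10.0, 20.0, 30.0, 40.0]})
ddf = dd.from_pandas(pdf, npartitions=2)

dd.pivot_table(ddf, index='store', columns='item',
               values='sales', aggfunc='sum').compute()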
-
dask.dataframe.reshape.
melt
(frame, id_vars=None, value_vars=None, var_name=None, value_name='value', col_level=None)¶ Unpivots a DataFrame from wide format to long format, optionally leaving identifier variables set.
This function is useful to massage a DataFrame into a format where one or more columns are identifier variables (id_vars), while all other columns, considered measured variables (value_vars), are “unpivoted” to the row axis, leaving just two non-identifier columns, ‘variable’ and ‘value’.
Parameters: frame : DataFrame
id_vars : tuple, list, or ndarray, optional
Column(s) to use as identifier variables.
value_vars : tuple, list, or ndarray, optional
Column(s) to unpivot. If not specified, uses all columns that are not set as id_vars.
var_name : scalar
Name to use for the ‘variable’ column. If None it uses frame.columns.name or ‘variable’.
value_name : scalar, default ‘value’
Name to use for the ‘value’ column.
col_level : int or string, optional
If columns are a MultiIndex then use this level to melt.
Returns: DataFrame
Unpivoted DataFrame.
See also
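For example, a minimal sketch (the frame is invented for illustration) that unpivots two measured columns into long format:

import pandas as pd
import dask.dataframe as dd

pdf = pd.DataFrame({'id': [1, 2], 'height': [1.6, 1.8], 'weight': [60, 80]})
ddf = dd.from_pandas(pdf, npartitions=1)

# One row per (id, variable) pair, with the measured value in 'value'
dd.melt(ddf, id_vars='id', value_vars=['height', 'weight']).compute()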